WorldWideScience

Sample records for model low-altitude component

  1. Current leakage for low altitude satellites: modeling applications

    International Nuclear Information System (INIS)

Konradi, A.; McCoy, J. E.; Garriott, O. K.

    1979-01-01

To simulate the behavior of a high voltage solar cell array in the ionospheric plasma environment, the large (90 ft x 55 ft diameter) vacuum chamber was used to measure the high-voltage plasma interactions of a 3 ft x 30 ft conductive panel. The chamber was filled with nitrogen and argon plasma at electron densities of up to 1,000,000 per cu cm. Measurements of current flow to the plasma were made in three configurations: (a) with one end of the panel grounded, (b) with the whole panel floating while a high bias was applied between the ends of the panel, and (c) with the whole panel at high negative voltage with respect to the chamber walls. The results indicate that a simple model with a constant panel conductivity and plasma resistance can adequately describe the voltage distribution along the panel and the plasma current flow. As expected, when a high potential difference is applied to the panel ends, more than 95% of the panel floats negative with respect to the plasma.
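
The constant-conductivity, constant-plasma-resistance picture described above amounts to a one-dimensional ladder network: series resistance along the panel with distributed leakage to the plasma. A minimal sketch of that idea, with segment counts, resistances, and bias values as illustrative assumptions rather than the experiment's parameters:

```python
# Minimal sketch (not the authors' code): a 1-D ladder-network model of a
# biased panel with constant sheet resistance and a constant distributed
# plasma leakage resistance, solved by finite differences.
# All numerical values below are illustrative assumptions.
import numpy as np

def panel_voltage(v_left, v_right, n=100, r_panel=10.0, r_plasma=1e4):
    """Voltage along a panel of n segments.
    r_panel  : series resistance per segment (ohm), assumed constant
    r_plasma : leakage resistance from each node to the plasma (ohm)
    """
    # Kirchhoff current balance at each interior node i:
    # (V[i-1]-V[i])/r_panel + (V[i+1]-V[i])/r_panel - V[i]/r_plasma = 0
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = v_left, v_right          # ends held at the applied bias
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / r_panel
        A[i, i] = -2.0 / r_panel - 1.0 / r_plasma
    v = np.linalg.solve(A, b)
    i_plasma = (v / r_plasma).sum()        # net current leaked to the plasma
    return v, i_plasma

# configuration (a): one end grounded, the other at an assumed +1000 V bias
v, i_leak = panel_voltage(v_left=0.0, v_right=1000.0)
print(f"mid-panel voltage: {v[len(v) // 2]:.1f} V, total plasma current: {i_leak:.3e} A")
```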

  2. Atmospheric drag model for Cassini orbit determination during low altitude Titan flybys

    Science.gov (United States)

    Pelletier, F. J.; Antreasian, P. G.; Bordi, J. J.; Criddle, K. E.; Ionasescu, R.; Jacobson, R. A.; Mackenzie, R. A.; Parcher, D. W.; Stauch, J. R.

    2006-01-01

On April 16, 2005, the Cassini spacecraft performed its lowest altitude flyby of Titan to date, the Titan-5 flyby, flying 1027 km above the surface of Titan. This document discusses the development of a Titan atmospheric drag model for the purpose of the orbit determination (OD) of Cassini. Results will be presented for the Titan-A flyby, the Titan-5 flyby, as well as the most recent low altitude Titan flyby, Titan-7. Different solutions will be compared in terms of OD performance: the flyby B-plane parameters, spacecraft thrusting activity, and drag estimates. These low altitude Titan flybys were an excellent opportunity to observe the effect of Titan's atmospheric drag on the orbit determination solution, and results show that the drag was successfully modeled to provide accurate flyby solutions.
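
For orbit determination purposes, the drag contribution enters the spacecraft force model as an acceleration term whose scale is then estimated from tracking data. A minimal sketch of such a cannonball drag term with an assumed exponential Titan atmosphere; every numerical value below is a placeholder, not a Cassini navigation parameter:

```python
# Minimal sketch (assumptions, not the Cassini navigation model): the standard
# cannonball drag acceleration with an assumed exponential atmosphere, the
# kind of term added to the force model and scaled during orbit determination.
import numpy as np

def drag_acceleration(r_vec, v_vec, rho0=5e-9, h0=1000e3, H=60e3,
                      Cd=2.2, A=20.0, m=4500.0, R_titan=2575e3):
    """Drag acceleration (m/s^2) on the spacecraft.
    rho0, h0, H : assumed reference density (kg/m^3), reference altitude (m)
                  and scale height (m) of an exponential Titan atmosphere
    Cd, A, m    : assumed drag coefficient, cross-section (m^2), mass (kg)
    """
    h = np.linalg.norm(r_vec) - R_titan            # altitude above Titan
    rho = rho0 * np.exp(-(h - h0) / H)             # exponential density model
    v = np.linalg.norm(v_vec)                      # atmosphere assumed static
    return -0.5 * rho * Cd * A / m * v * v_vec

r = np.array([2575e3 + 1027e3, 0.0, 0.0])   # Titan-5 flyby altitude ~1027 km
v = np.array([0.0, 6000.0, 0.0])            # assumed flyby speed ~6 km/s
print(drag_acceleration(r, v))
```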

  3. Math modeling for helicopter simulation of low speed, low altitude and steeply descending flight

    Science.gov (United States)

    Sheridan, P. F.; Robinson, C.; Shaw, J.; White, F.

    1982-01-01

    A math model was formulated to represent some of the aerodynamic effects of low speed, low altitude, and steeply descending flight. The formulation is intended to be consistent with the single rotor real time simulation model at NASA Ames Research Center. The effect of low speed, low altitude flight on main rotor downwash was obtained by assuming a uniform plus first harmonic inflow model and then by using wind tunnel data in the form of hub loads to solve for the inflow coefficients. The result was a set of tables for steady and first harmonic inflow coefficients as functions of ground proximity, angle of attack, and airspeed. The aerodynamics associated with steep descending flight in the vortex ring state were modeled by replacing the steady induced downwash derived from momentum theory with an experimentally derived value and by including a thrust fluctuations effect due to vortex shedding. Tables of the induced downwash and the magnitude of the thrust fluctuations were created as functions of angle of attack and airspeed.
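
The "uniform plus first harmonic" inflow representation mentioned above can be written as lambda(r, psi) = lambda_0 + (r/R)(lambda_s sin psi + lambda_c cos psi), with the coefficients looked up from tables. A minimal sketch under that assumption, using made-up coefficient values interpolated on airspeed only (the actual tables also depend on ground proximity and angle of attack):

```python
# Minimal sketch of the "uniform plus first harmonic" inflow model described
# above; coefficient values and the single-variable interpolation are
# illustrative assumptions, not the NASA Ames tables.
import numpy as np

# assumed table: steady and first-harmonic inflow coefficients vs airspeed (kt)
AIRSPEED_KT = np.array([0.0, 20.0, 40.0, 60.0])
LAMBDA_0    = np.array([0.050, 0.045, 0.035, 0.025])    # uniform component
LAMBDA_S    = np.array([0.000, 0.010, 0.020, 0.025])    # sin(psi) component
LAMBDA_C    = np.array([0.000, -0.005, -0.010, -0.012]) # cos(psi) component

def induced_inflow(r_over_R, psi, airspeed_kt):
    """Nondimensional induced inflow at radial station r/R and azimuth psi."""
    l0 = np.interp(airspeed_kt, AIRSPEED_KT, LAMBDA_0)
    ls = np.interp(airspeed_kt, AIRSPEED_KT, LAMBDA_S)
    lc = np.interp(airspeed_kt, AIRSPEED_KT, LAMBDA_C)
    return l0 + r_over_R * (ls * np.sin(psi) + lc * np.cos(psi))

print(induced_inflow(0.75, np.radians(90.0), 30.0))
```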

  4. Two-Dimensional FCT Model of Low-Altitude Nuclear Effects.

    Science.gov (United States)

    1980-10-16

medium, and preliminary height-of-burst (HOB) calculations. A. Reflecting Shock in a Reactive Medium. Under a Navy-supported program to study combustion hydrodynamics, LCP has developed a simple numerical treatment of combustion processes based on the induction time hypothesis. This model represents the chemistry through a composite process, in which reactants begin to combine into combustion products only after a finite "induction" time has

  5. Dynamics modeling and control of a transport aircraft for ultra-low altitude airdrop

    Directory of Open Access Journals (Sweden)

    Liu Ri

    2015-04-01

The nonlinear aircraft model with heavy cargo moving inside is derived by using the separation body method, which can accurately describe the influence of the moving cargo on the aircraft attitude and altitude. Furthermore, the nonlinear system is decoupled and linearized through the input–output feedback linearization method. On this basis, an iterative quasi-sliding mode (SM) flight controller for speed and pitch angle control is proposed. At the first-level SM, a global dynamic switching function is introduced, thus eliminating the reaching phase of the sliding motion. At the second-level SM, a nonlinear function with the property of “smaller errors correspond to bigger gains and bigger errors correspond to saturated gains” is designed to form an integral sliding manifold, which weakens the overcompensation of the integral term for large errors. Lyapunov-based analysis shows that the controller has strong robustness and can reject both constant and time-varying model uncertainties. The performance of the proposed control strategy is verified in a maximum load airdrop mission.
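
One way to realize the "smaller errors correspond to bigger gains and bigger errors correspond to saturated gains" idea is a bounded error function inside the integral sliding surface. The sketch below is an interpretation under that assumption, not the authors' controller; the gain shape, threshold, and surface coefficient are illustrative:

```python
# Minimal sketch (an interpretation, not the paper's controller): a bounded
# nonlinear error function used inside an integral sliding surface, so that
# small errors are amplified while large errors are saturated and the
# integral term does not overcompensate.
import numpy as np

def nl_error(e, delta=0.1, k=1.0):
    """High gain for |e| < delta, saturated output for larger |e|."""
    return np.where(np.abs(e) <= delta,
                    k * e / delta,          # slope k/delta near the origin
                    k * np.sign(e))         # saturated beyond delta

def sliding_surface(e, e_int, c=2.0):
    """Integral sliding manifold s = e + c * integral of nl_error(e) dt."""
    return e + c * e_int

# usage: accumulate the integral of nl_error(e) inside a simulation loop
dt, e_int = 0.01, 0.0
for e in np.linspace(1.0, 0.0, 100):        # assumed decaying tracking error
    e_int += nl_error(e) * dt
    s = sliding_surface(e, e_int)
print(float(s))
```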

  6. Measurement of Low-Altitude Infrared Transmission

    National Research Council Canada - National Science Library

    Zeisse, C

    1999-01-01

    Infrared propagation at low altitudes is determined by extinction caused by molecules, aerosol particles, and ray bending by refraction, three effects that control the mean value of the signal (the transmission...

  7. Low-Altitude Operation of Unmanned Rotorcraft

    Science.gov (United States)

    Scherer, Sebastian

Currently deployed unmanned rotorcraft rely on preplanned missions or teleoperation and do not actively incorporate information about obstacles, landing sites, wind, position uncertainty, and other aerial vehicles during online motion planning. Prior work has successfully addressed some tasks such as obstacle avoidance at slow speeds, or landing at locations known to be good. However, to enable autonomous missions in cluttered environments, the vehicle has to react quickly to previously unknown obstacles, respond to changing environmental conditions, and find unknown landing sites. We consider the problem of enabling autonomous operation at low altitude with contributions to four problems. First we address the problem of fast obstacle avoidance for a small aerial vehicle and present results from over 1,000 runs at speeds up to 10 m/s. Fast response is achieved through a reactive algorithm whose response is learned based on observing a pilot. Second, we show an algorithm to update the obstacle cost expansion for path planning quickly and demonstrate it on a micro aerial vehicle and an autonomous helicopter avoiding obstacles. Next, we examine the mission of finding a place to land near a ground goal. Good landing sites need to be detected and found, and the final touchdown goal is unknown. To detect landing sites we present a model-based algorithm that incorporates many helicopter-relevant constraints such as landing sites, approach, abort, and ground paths in 3D range data. The landing site evaluation algorithm uses a patch-based coarse evaluation for slope and roughness, and a fine evaluation that fits a 3D model of the helicopter and landing gear to calculate a goodness measure. The data are evaluated in real time to enable the helicopter to decide on a place to land. We show results from urban, vegetated, and desert environments, and demonstrate the first autonomous helicopter that selects its own landing sites. We present a generalized

  8. Analysis of the low-altitude proton flux asymmetry: methodology

    CERN Document Server

    Kruglanski, M

    1999-01-01

Existing East-West asymmetry models of the trapped proton fluxes at low altitudes depend on the local magnetic dip angle and a density scale height derived from atmospheric models. We propose an alternative approach which maps the directional flux over a drift shell (B_m, L) in terms of the local pitch and azimuthal angles alpha and beta, where beta is defined in the local mirror plane as the angle between the proton arrival direction and the surface normal to the drift shell. This approach has the advantage that it only depends on drift shell parameters and does not involve an atmosphere model. A semi-empirical model based on the new methodology is able to reproduce the angular distribution of a set of SAMPEX/PET proton flux measurements. Guidelines are proposed for spacecraft missions and data analysis procedures that are intended to be used for the building of new trapped radiation environment models.

  9. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

Low-altitude Unmanned Aerial Vehicle (UAV) images, which include distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangulation (AT) network, a parallel inner orientation algorithm, a ground control points (GCPs) predicting method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for photogrammetric processing of low-altitude UAV images and 3D visualization of products.
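
The tie-point step described above rests on SIFT feature matching between overlapping images. A minimal sketch of the standard OpenCV pipeline (not the paper's improved SIFT variant); the file names are placeholders:

```python
# Minimal sketch (not the paper's improved SIFT pipeline): standard SIFT
# feature extraction and ratio-test matching between two overlapping UAV
# images, the kind of tie-point generation that feeds bundle adjustment.
import cv2

img1 = cv2.imread("uav_0001.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file
img2 = cv2.imread("uav_0002.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test to keep reliable tie points only
tie_points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
              for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(tie_points)} candidate tie points")
```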

  10. Low-Altitude Distribution of Radiation Belt Electrons

    National Research Council Canada - National Science Library

    Selesnick, R. S; Looper, M. D; Albert, J. M

    2004-01-01

    A numerical simulation of the low-altitude electron radiation belt is described. It includes dependences on the electron's bounce and drift phases, equatorial pitch angle, and kinetic energy in the range of 1 to several MeV at L = 3.5...

  11. Some low-altitude cusp dependencies on the interplanetary magnetic field

    International Nuclear Information System (INIS)

    Newell, P.T.; Meng, C.; Sibeck, D.G.; Lepping, R.

    1989-01-01

Although it has become well established that the low-altitude polar cusp moves equatorward during intervals of southward interplanetary magnetic field (IMF B_z), the dependence of the cusp's local time position on IMF B_y is less well documented: the cusp shifts prenoon for B_y negative (positive) in the northern (southern) hemisphere and postnoon for B_y positive (negative) in the northern (southern) hemisphere. The B_y-induced shift is much more pronounced for southward than for northward B_z, a result that appears to be consistent with elementary considerations from, for example, the antiparallel merging model. No interhemispherical latitudinal differences in cusp positions were found that could be attributed to the IMF B_x component. As expected, the cusp latitudinal position correlated reasonably well (0.70) with B_z when the IMF had a southward component; the previously much less investigated correlation for B_z northward proved to be only 0.18, suggestive of a half-wave rectifier effect. The ratio of cusp ion number flux precipitation for B_z southward to that for B_z northward was 1.75±0.12. The statistical local time (full) width of the cusp proper was found to be 2.1 hours for B_z northward and 2.8 hours for B_z southward. Copyright American Geophysical Union 1989.

  12. Low-Altitude Airbursts and the Impact Threat - Final LDRD Report.

    Energy Technology Data Exchange (ETDEWEB)

    Boslough, Mark B.; Crawford, David A.

    2007-12-01

The purpose of this nine-week project was to advance the understanding of low-altitude airbursts by developing the means to model them at extremely high resolution in order to span the scales of entry physics as well as blast wave and plume formation. Small asteroid impacts on Earth are a recognized hazard, but the full nature of the threat is still not well understood. We used shock physics codes to discover emergent phenomena associated with low-altitude airbursts such as the Siberian Tunguska event of 1908 and the Egyptian glass-forming event 29 million years ago. The planetary defense community is beginning to recognize the significant threat from such airbursts. Low-altitude airbursts are the only class of impacts that have a significant probability of occurring within a planning time horizon. There is roughly a 10% chance of a megaton-scale low-altitude airburst event in the next decade. The first part of this LDRD final project report is a preprint of our proceedings paper associated with the plenary presentation at the Hypervelocity Impact Society 2007 Symposium in Williamsburg, Virginia (International Journal of Impact Engineering, in press). The paper summarizes discoveries associated with a series of 2D axially-symmetric CTH simulations. The second part of the report contains slides from an invited presentation at the American Geophysical Union Fall 2007 meeting in San Francisco. The presentation summarizes the results of a series of 3D oblique impact simulations of the 1908 Tunguska explosion. Because of the brevity of this late-start project, the 3D results have not yet been written up for a peer-reviewed publication. We anticipate the opportunity to eventually run simulations that include the actual topography at Tunguska, at which time these results will be published.

  13. Propagation of whistler mode chorus to low altitudes: Spacecraft observations of structured ELF hiss

    Science.gov (United States)

Santolík, O.; Chum, J.; Parrot, M.; Gurnett, D. A.; Pickett, J. S.; Cornilleau-Wehrlin, N.

    2006-10-01

    We interpret observations of low-altitude electromagnetic ELF hiss observed on the dayside at subauroral latitudes. A divergent propagation pattern has been reported between 50° and 75° of geomagnetic latitude. The waves propagate with downward directed wave vectors which are slightly equatorward inclined at lower magnetic latitudes and slightly poleward inclined at higher latitudes. Reverse ray tracing using different plasma density models indicates a possible source region near the geomagnetic equator at a radial distance between 5 and 7 Earth radii by a mechanism acting on highly oblique wave vectors near the local Gendrin angle. We analyze waveforms received at altitudes of 700-1200 km by the Freja and DEMETER spacecraft and we find that low-altitude ELF hiss contains discrete time-frequency structures resembling wave packets of whistler mode chorus. Emissions of chorus also predominantly occur on the dawnside and dayside and have recently been considered as a possible source of highly accelerated electrons in the outer Van Allen radiation belt. Detailed measurements of the Cluster spacecraft at radial distances of 4-5 Earth radii show chorus propagating downward from the source region localized close to the equator. The time-frequency structure and frequencies of chorus observed by Cluster along the reverse raypaths of ELF hiss are consistent with the hypothesis that the frequently observed dayside ELF hiss is just the low-altitude manifestation of natural magnetospheric emissions of whistler mode chorus.

  14. Source of the low-altitude hiss in the ionosphere

    Czech Academy of Sciences Publication Activity Database

    Chen, L.; Santolík, Ondřej; Hájoš, Mychajlo; Zheng, L.; Zhima, Z.; Heelis, R.; Hanzelka, Miroslav; Horne, R. B.; Parrot, M.

    2017-01-01

Vol. 44, No. 5 (2017), pp. 2060-2069. ISSN 0094-8276. R&D Projects: GA ČR(CZ) GA17-07027S; GA MŠk(CZ) LH15304. Grant - others: AV ČR(CZ) AP1401. Program: Academic Premium Award - Praemium Academiae. Institutional support: RVO:68378289. Keywords: ionospheric hiss * low-altitude hiss * plasmaspheric hiss * ray tracing. Subject RIV: BL - Plasma and Gas Discharge Physics. OECD field: Fluids and plasma physics (including surface physics). Impact factor: 4.253, year: 2016. http://onlinelibrary.wiley.com/doi/10.1002/2016GL072181/full

  15. APPLICATION OF UAV SYSTEM FOR LOW ALTITUDE PHOTOGRAMMETRY IN SHANXI

    Directory of Open Access Journals (Sweden)

    C. Junqing

    2012-07-01

In recent years, in response to urgent demands from the state and society for high-resolution aerial images and large-scale DLG (Digital Line Graphics), UAV-borne low-altitude photogrammetry systems have been used more and more widely. Drawing on the application of a UAV system in Shanxi for collecting 1:1000 scale DLG, this paper introduces the main steps and key technologies of UAV systems for low-altitude aerial photogrammetry. We took an area of Shanxi as the survey area and acquired 1024 aerial images of it. After aerial triangulation, the plane accuracy of the encrypted (densified) points is 0.21 m and the height accuracy is 0.35 m, which meets the accuracy requirements of 1:1000 scale mapping. It can be seen that the UAV system for low altitude photogrammetry has its own advantages in acquiring high-resolution aerial images and large-scale DLG, and that UAV systems have great development prospects.

  16. The Study of a Super Low Altitude Satellite

    Science.gov (United States)

    Noda, Atsushi; Homma, Masanori; Utashima, Masayoshi

This paper reports the result of a study of a super low altitude satellite. The altitude of this satellite's orbit is lower than ever before. The altitude of a conventional earth observing satellite is generally around 600 km to 900 km. The lowest altitude of an earth observing satellite launched in Japan was 350 km, for the Tropical Rainfall Measuring Mission (TRMM). By comparison, the satellite reported in this paper is much lower than that: it is planned to orbit below 200 km. Furthermore, the planned flight duration is more than two years. No satellite in the world has yet maintained such a low altitude for that long. A satellite in such a low orbit drops quickly because of the strong air drag. Our satellite will cancel the air drag effect with ion engine thrust. To realize this idea, a drag-free system will be applied. This usually leads to a complicated and expensive satellite system. We, however, succeeded in finding a robust control law for a simple system even under unpredictable changes in air drag. When the altitude of the satellite is lowered successfully, the spatial resolution of an optical sensor can be greatly improved. If the satellite is equipped with a SAR, it enables a drastic reduction of electric power consumption and a major improvement in spatial resolution at the same time.
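
The drag-cancellation idea can be bounded with a back-of-the-envelope calculation: the drag force at orbital speed near 200 km sets the continuous ion-engine thrust required. A minimal sketch with assumed density, area, and drag-coefficient values (none of them from the study above):

```python
# Minimal sketch of the drag-cancellation idea described above: estimate the
# air drag on a small satellite near 200 km and hence the ion-engine thrust
# needed to cancel it. Density, area, and Cd values are rough assumptions.
import math

def drag_force(altitude_km, area_m2=1.0, cd=2.2):
    """Drag force (N) at circular orbital speed for the given altitude."""
    mu, r_earth = 3.986e14, 6371e3
    r = r_earth + altitude_km * 1e3
    v = math.sqrt(mu / r)                      # circular orbital speed (m/s)
    # very rough exponential density model around 200 km (assumed values)
    rho = 2.5e-10 * math.exp(-(altitude_km - 200.0) / 37.0)   # kg/m^3
    return 0.5 * rho * cd * area_m2 * v * v

print(f"drag at 200 km: {drag_force(200):.3f} N -> required continuous thrust")
```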

  17. Component lifetime modelling

    NARCIS (Netherlands)

    Verweij, J.F.; Verweij, J.F.; Brombacher, A.C.; Brombacher, A.C.; Lunenborg, M.M.; Lunenborg, M.M.

    1994-01-01

    There are two approaches to component lifetime modelling. The first one uses a reliability prediction method as described in the (military) handbooks with the appropriate models and parameters. The advantages are: (a) It takes into account all possible failure mechanisms. (b) It is easy to use. The

  18. Investigating the auroral electrojets with low altitude polar orbiting satellites

    DEFF Research Database (Denmark)

    Moretto, T.; Olsen, Nils; Ritter, P.

    2002-01-01

Three geomagnetic satellite missions currently provide high precision magnetic field measurements from low altitude polar orbiting spacecraft. We demonstrate how these data can be used to determine the intensity and location of the horizontal currents that flow in the ionosphere, predominantly in the auroral electrojets. First, we examine the results during a recent geomagnetic storm. The currents derived from two satellites at different altitudes are in very good agreement, which verifies good stability of the method. Further, a very high degree of correlation (correlation coefficients of 0.8-0.9) is observed between the amplitudes of the derived currents and the commonly used auroral electrojet indices based on magnetic measurements at ground. This points to the potential of defining an auroral activity index based on the satellite observations, which could be useful for space weather monitoring...

  19. Low-altitude ion heating with downflowing and upflowing ions

    Science.gov (United States)

    Shen, Y.; Knudsen, D. J.; Burchill, J. K.; Howarth, A. D.; Yau, A. W.; James, G.; Miles, D.; Cogger, L. L.; Perry, G. W.

    2017-12-01

Mechanisms that energize ions at the initial stage of ion upflow are still not well understood. We statistically investigate ionospheric ion energization and field-aligned motion at very low altitudes (330-730 km) using simultaneous plasma, magnetic field, wave electric field and optical data from the e-POP satellite. The high-time-resolution (10 ms) dataset enables us to study the micro-structures of ion heating and field-aligned ion motion. The ion temperature and field-aligned bulk flow velocity are derived from 2-D ion distribution functions measured by the SEI instrument. From March 2015 to March 2016, we found 17 orbits (in total 24 ion heating periods) with clear ion heating signatures passing across the dayside cleft or the nightside auroral regions. Most of these events have consistent ion heating and flow velocity characteristics observed from both the SEI and IRM instruments. The perpendicular ion temperature goes up to 4.5 eV within a 2 km-wide region in some cases, in which the Radio Receiver Instrument (RRI) sees broadband extremely low frequency (BBELF) waves, demonstrating significant wave-ion heating down to as low as 350 km. The e-POP Fast Auroral Imager (FAI) and Magnetic Field (MGF) instruments show that many events are associated with active aurora and are within downward current regions. Contrary to what would be expected from mirror-force acceleration of heated ions, the majority of these heating events (17 out of 24) are associated with core ion downflow rather than upflow. These statistical results provide us with new insights into ion heating and field-aligned flow processes at very low altitudes.

  20. Computer vision techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar

    1990-01-01

Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles, presents challenging problems. Research is described which applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for the obstacle avoidance in an automatic guidance system and as advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.

  1. Developing a Model Component

    Science.gov (United States)

    Fields, Christina M.

    2013-01-01

The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch-All replaceable-core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
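
The modeling approach described, continuity plus Bernoulli's equation across the dryer treated as an area restriction, can be sketched in a few lines. The fluid properties and flow areas below are illustrative assumptions, not the UCTS values, and the Simulink implementation is only paraphrased here in Python:

```python
# Minimal sketch (not the SCCS Simulink model): continuity plus Bernoulli's
# equation across a filter-dryer treated as an area restriction, giving the
# velocity and pressure change the component imposes on the line.
# Fluid properties and areas are assumed illustrative values.
def filter_dryer(p_in, v_in, a_in, a_out, rho=1100.0):
    """Return (p_out, v_out) across the dryer.
    p_in  : inlet pressure (Pa), v_in : inlet velocity (m/s)
    a_in, a_out : inlet/outlet flow areas (m^2), rho : coolant density (kg/m^3)
    """
    v_out = v_in * a_in / a_out                      # continuity (incompressible)
    p_out = p_in + 0.5 * rho * (v_in**2 - v_out**2)  # Bernoulli, no loss term
    return p_out, v_out

print(filter_dryer(p_in=500e3, v_in=1.5, a_in=2.0e-4, a_out=1.2e-4))
```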

  2. Proton isotropy boundaries as measured on mid- and low-altitude satellites

    Directory of Open Access Journals (Sweden)

    N. Yu. Ganushkina

    2005-07-01

Polar CAMMICE MICS proton pitch angle distributions with energies of 31-80 keV were analyzed to determine the locations where anisotropic pitch angle distributions (perpendicular flux dominating) change to isotropic distributions. We compared the positions of these mid-altitude isotropic distribution boundaries (IDB) for different activity conditions with low-altitude isotropic boundaries (IB) observed by NOAA 12. Although the obtained statistical properties of IDBs were quite similar to those of IBs, a small difference in latitudes, most pronounced on the nightside and dayside, was found. We selected several events during which simultaneous observations in the same local time sector were available from Polar at mid-altitudes, and NOAA or DMSP at low altitudes. Magnetic field mapping using the Tsyganenko T01 model with the observed solar wind input parameters showed that the low- and mid-altitude isotropization boundaries were closely located, which leads us to suggest that the Polar IDBs and low-altitude IBs are related. Furthermore, we introduced a procedure to control the difference between the observed and model magnetic field to reduce the large scatter in the mapping. We showed that the isotropic distribution boundary (IDB) lies in the region where Rc/ρ~6, that is, at the boundary of the region where the non-adiabatic pitch angle scattering is strong enough. We therefore conclude that the scattering in the large field line curvature regions in the nightside current sheet is the main mechanism producing isotropization for the main portion of the proton population in the tail current sheet. This mechanism controls the observed positions of both the IB and IDB boundaries. Thus, this tail region can be probed, in its turn, with observations of these isotropy boundaries. Keywords: Magnetospheric physics (energetic particles, precipitating; magnetospheric configuration and dynamics; magnetotail)

  3. Plasma waves observed at low altitudes in the tenuous Venus nightside ionosphere

    Science.gov (United States)

    Strangeway, R. J.; Russell, C. T.; Ho, C. M.; Brace, L. H.

    1993-12-01

The Pioneer Venus (PV) Orbiter Electric Field Detector (OEFD) measured many plasma wave bursts throughout the low altitude ionosphere during the final entry phase of the spacecraft. Apart from 100 Hz bursts observed at very low altitudes (approx. 130 km), the bursts fall into two classes. The first of these is a wideband signal that is observed in regions of low magnetic field but average densities, in comparison to the prevailing ionospheric conditions. This wideband signal is not observed in the 30 kHz channel of the OEFD, but is restricted to the 5.4 kHz channel and lower. Since these bursts are observed with a roughly constant burst rate above 160 km altitude, we attribute them to ion acoustic mode waves generated by precipitating solar wind electrons. The second type of signal is restricted to 100 Hz only, and is observed in regions of low electron beta, consistent with whistler-mode waves. These waves could be generated by lightning in the Venus atmosphere if the vertical component of the magnetic field is greater than 3.6 nT. Because the ionosphere is very different during the entry phase, compared to the ionosphere as observed early in the Pioneer Venus mission, any conclusions regarding the source of the plasma waves detected during the entry phase cannot be applied directly to the earlier observations.

  4. UAV Low Altitude Photogrammetry for Power Line Inspection

    Directory of Open Access Journals (Sweden)

    Yong Zhang

    2017-01-01

When the distance between an obstacle and a power line is less than the discharge distance, a discharge arc can be generated, resulting in the interruption of power supplies. Therefore, regular safety inspections are necessary to ensure the safe operation of power grids. Tall vegetation and buildings are the key factors threatening the safe operation of extra high voltage transmission lines within a power line corridor. Manual or light detection and ranging (LiDAR) based inspections are time consuming and expensive. To make safety inspections more efficient and flexible, a low-altitude unmanned aerial vehicle (UAV) remote-sensing platform, equipped with an optical digital camera, was used to inspect power line corridors. We propose a semi-patch matching algorithm based on epipolar constraints, using both the correlation coefficient (CC) and the shape of its curve to extract three dimensional (3D) point clouds for a power line corridor. We use a stereo image pair from inter-strip to improve power line measurement accuracy by transforming the power line direction to be approximately perpendicular to the epipolar line. The distance between the power lines and the 3D point cloud is taken as a criterion for locating obstacles within the power line corridor automatically. Experimental results show that our proposed method is a reliable, cost effective, and applicable way for practical power line inspection and can locate obstacles within the power line corridor with accuracy better than ±0.5 m.

  5. Identification and observations of the plasma mantle at low altitude

    International Nuclear Information System (INIS)

    Newell, P.T.; Meng, Ching-I.; Sanchez, E.R.; Burke, W.J.; Greenspan, M.E.

    1991-01-01

The direct injection of magnetosheath plasma into the cusp produces at low altitude a precipitation regime with an energy-latitude dispersion, the more poleward portion of which the authors herein term the cusp plume. An extensive survey of the Defense Meteorological Satellite Program (DMSP) F7 and F9 32 eV to 30 keV precipitating particle data shows that similar dispersive signatures exist over much of the dayside, just poleward of the auroral oval. Away from noon (or more precisely, anywhere not immediately poleward of the cusp) the fluxes are reduced by a factor of about 10 as compared to the cusp plume, but other characteristics are quite similar. For example, the inferred temperatures and flow velocities, and the characteristic decline of energy and number flux with increasing latitude, are essentially the same in a longitudinally broad ring of precipitation a few degrees thick in latitude over much of the dayside. They conclude that the field lines on which such precipitation occurs thread the magnetospheric plasma mantle over the entire longitudinally extended ring. Besides the location of occurrence (i.e., immediately poleward of the dayside oval), the identification is based especially on the associated very soft ion spectra, which have densities from a few times 10^-2 to a few times 10^-1 /cm^3; on the temperature range, which is from a few tens of eV up to about 200 eV; and on the characteristic gradients with latitude. Further corroborating evidence that the precipitation is associated with field lines which thread the plasma mantle includes drift meter observations, which show that regions so identified based on the particle data consistently lie on antisunward convecting field lines. The observations indicate that some dayside high-latitude auroral features just poleward of the auroral oval are embedded in the plasma mantle.

  6. WETLAND VEGETATION INTEGRITY ASSESSMENT WITH LOW ALTITUDE MULTISPECTRAL UAV IMAGERY

    Directory of Open Access Journals (Sweden)

    M. A. Boon

    2017-08-01

Multispectral sensors for Unmanned Aerial Vehicles (UAVs) were until recently too heavy and bulky, although this has changed in recent times and they are now commercially available. The usage of these sensors is mostly directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for mapping of wetland ecosystems are rare. Here, we evaluate the performance of low altitude multispectral UAV imagery to determine the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the determination of the integrity of the wetland vegetation. Furthermore, we tested different software applications for the processing of the imagery. The advantages and disadvantages we experienced with these applications are also briefly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn over a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features to derive a high resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland’s structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high resolution multispectral orthoimages and NDVI. The WET-Health vegetation module completed with the aid of the multispectral UAV products indicated that the vegetation of the wetland is largely modified (“D” PES Category) and that the

  7. Wetland Vegetation Integrity Assessment with Low Altitude Multispectral Uav Imagery

    Science.gov (United States)

    Boon, M. A.; Tesfamichael, S.

    2017-08-01

Multispectral sensors for Unmanned Aerial Vehicles (UAVs) were until recently too heavy and bulky, although this has changed in recent times and they are now commercially available. The usage of these sensors is mostly directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for mapping of wetland ecosystems are rare. Here, we evaluate the performance of low altitude multispectral UAV imagery to determine the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the determination of the integrity of the wetland vegetation. Furthermore, we tested different software applications for the processing of the imagery. The advantages and disadvantages we experienced with these applications are also briefly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn over a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features to derive a high resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high resolution multispectral orthoimages and NDVI. The WET-Health vegetation module completed with the aid of the multispectral UAV products indicated that the vegetation of the wetland is largely modified ("D" PES Category) and that the condition is expected to
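
The NDVI used in both wetland records above is a per-pixel band ratio, (NIR - Red) / (NIR + Red), computed from co-registered red and near-infrared reflectance rasters. A minimal sketch of that computation (band I/O omitted, values synthetic):

```python
# Minimal sketch of the NDVI computation referred to above: per-pixel
# (NIR - Red) / (NIR + Red) from two co-registered band arrays.
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """NDVI in [-1, 1] from red and near-infrared reflectance arrays."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

# example with synthetic reflectance values standing in for real rasters
red = np.array([[0.10, 0.30], [0.05, 0.25]])
nir = np.array([[0.60, 0.35], [0.55, 0.20]])
print(ndvi(red, nir))
```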

  8. Molecular Models Candy Components

    Science.gov (United States)

    Coleman, William F.

    2007-01-01

A paper by Fanny Ennever that uses candy to explain various principles of chemistry is described. The paper explains the components of sucrose and of the invert sugar that results from the hydrolysis of sucrose, and it helps students determine whether the products are indeed hydrates of carbon.

  9. Easy-to-Use UAV Ground Station Software for Low-Altitude Civil Operations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design and develop easy-to-use Ground Control Station (GCS) software for low-altitude civil Unmanned Aerial Vehicle (UAV) operations. The GCS software...

  10. Easy-to-Use UAV Ground Station Software for Low-Altitude Civil Operations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design and develop easy-to-use Ground Control Station (GCS) software for low-altitude civil Unmanned Aerial Vehicle (UAV) operations. The GCS software...

  11. Precision Agriculture: Using Low-Cost Systems to Acquire Low-Altitude Images.

    Science.gov (United States)

    Ponti, Moacir; Chaves, Arthur A; Jorge, Fabio R; Costa, Gabriel B P; Colturato, Adimara; Branco, Kalinka R L J C

    2016-01-01

    Low cost remote sensing imagery has the potential to make precision farming feasible in developing countries. In this article, the authors describe image acquisition from eucalyptus, bean, and sugarcane crops acquired by low-cost and low-altitude systems. They use different approaches to handle low-altitude images in both the RGB and NIR (near-infrared) bands to estimate and quantify plantation areas.

  12. Lagrangian analysis of low altitude anthropogenic plume processing across the North Atlantic

    Directory of Open Access Journals (Sweden)

    E. Real

    2008-12-01

The photochemical evolution of an anthropogenic plume from the New-York/Boston region during its transport at low altitudes over the North Atlantic to the European west coast has been studied using a Lagrangian framework. This plume, originally strongly polluted, was sampled by research aircraft just off the North American east coast on 3 successive days, and then 3 days downwind off the west coast of Ireland where another aircraft re-sampled a weakly polluted plume. Changes in trace gas concentrations during transport are reproduced using a photochemical trajectory model including deposition and mixing effects. Chemical and wet deposition processing dominated the evolution of all pollutants in the plume. The mean net photochemical O3 production is estimated to be −5 ppbv/day leading to low O3 by the time the plume reached Europe. Model runs with no wet deposition of HNO3 predicted much lower average net destruction of −1 ppbv/day O3, arising from increased levels of NOx via photolysis of HNO3. This indicates that wet deposition of HNO3 is indirectly responsible for 80% of the net destruction of ozone during plume transport. If the plume had not encountered precipitation, it would have reached Europe with O3 concentrations of up to 80 to 90 ppbv and CO between 120 and 140 ppbv. Photochemical destruction also played a more important role than mixing in the evolution of plume CO due to high levels of O3 and water vapour showing that CO cannot always be used as a tracer for polluted air masses, especially in plumes transported at low altitudes. The results also show that, in this case, an increase in O3/CO slopes can be attributed to photochemical destruction of CO and not to photochemical O3 production as is often assumed.

  13. Differences in Hematological Traits between High- and Low-Altitude Lizards (Genus Phrynocephalus.

    Directory of Open Access Journals (Sweden)

    Songsong Lu

    Full Text Available Phrynocephalus erythrurus (Lacertilia: Agamidae is considered to be the highest living reptile in the world (about 4500-5000 m above sea level, whereas Phrynocephalus przewalskii inhabits low altitudes (about 1000-1500 m above sea level. Here, we report the differences in hematological traits between these two different Phrynocephalus species. Compared with P. przewalskii, the results indicated that P. erythrurus own higher oxygen carrying capacity by increasing red blood cell count (RBC, hemoglobin concentration ([Hb] and hematocrit (Hct and these elevations could promote oxygen carrying capacity without disadvantage of high viscosity. The lower partial pressure of oxygen in arterial blood (PaO2 of P. erythrurus did not cause the secondary alkalosis, which may be attributed to an efficient pulmonary system for oxygen (O2 loading. The elevated blood-O2 affinity in P. erythrurus may be achieved by increasing intrinsic O2 affinity of isoHbs and balancing the independent effects of potential heterotropic ligands. We detected one α-globin gene and three β-globin genes with 1 and 33 amino acid substitutions between these two species, respectively. Molecular dynamics simulation results showed that amino acids substitutions in β-globin chains could lead to the elimination of hydrogen bonds in T-state Hb models of P. erythrurus. Based on the present data, we suggest that P. erythrurus have evolved an efficient oxygen transport system under the unremitting hypobaric hypoxia.

  14. Monitoring beach evolution using low-altitude aerial photogrammetry and UAV drones

    Science.gov (United States)

    Rovere, Alessio; Casella, Elisa; Vacchi, Matteo; Mucerino, Luigi; Pedroncini, Andrea; Ferrari, Marco; Firpo, Marco

    2014-05-01

Beach monitoring is essential in order to understand the mechanisms of evolution of soft coasts and the rates of erosion. Traditional beach monitoring techniques involve topographic and bathymetric surveys of the beach, and/or aerial photos repeated in time and compared through geographical information systems. A major problem of this kind of approach is the high economic cost. This often leads to increasing the time lag between successive monitoring campaigns to reduce survey costs, with the consequence of fragmenting the information available for coastal zone management. MIRAMar is a project funded by Regione Liguria through the PO CRO European Social Fund, and has two main objectives: i) to study and develop an innovative, relatively low-cost technique to monitor the evolution of the shoreline using low-altitude Unmanned Aerial Vehicle (UAV) photogrammetry; ii) to study the impact of different types of storm events on a vulnerable coastal tract subject to coastal erosion, also using the data collected by the UAV instrument. To achieve these aims we use a drone with its hardware and software suite, traditional survey techniques (bathymetric surveys, topographic GPS surveys and GIS techniques), and we implement a numerical modeling chain (coupling hydrodynamic, wave and sand transport modules) in order to study the impact of different types of storm events on a vulnerable coastal tract subject to coastal erosion.

  15. Differences in Hematological Traits between High- and Low-Altitude Lizards (Genus Phrynocephalus).

    Science.gov (United States)

    Lu, Songsong; Xin, Ying; Tang, Xiaolong; Yue, Feng; Wang, Huihui; Bai, Yucheng; Niu, Yonggang; Chen, Qiang

    2015-01-01

Phrynocephalus erythrurus (Lacertilia: Agamidae) is considered to be the highest living reptile in the world (about 4500-5000 m above sea level), whereas Phrynocephalus przewalskii inhabits low altitudes (about 1000-1500 m above sea level). Here, we report the differences in hematological traits between these two different Phrynocephalus species. Compared with P. przewalskii, the results indicated that P. erythrurus has a higher oxygen carrying capacity, achieved by increasing red blood cell count (RBC), hemoglobin concentration ([Hb]) and hematocrit (Hct), and these elevations could promote oxygen carrying capacity without the disadvantage of high viscosity. The lower partial pressure of oxygen in arterial blood (PaO2) of P. erythrurus did not cause secondary alkalosis, which may be attributed to an efficient pulmonary system for oxygen (O2) loading. The elevated blood-O2 affinity in P. erythrurus may be achieved by increasing the intrinsic O2 affinity of isoHbs and balancing the independent effects of potential heterotropic ligands. We detected one α-globin gene and three β-globin genes, with 1 and 33 amino acid substitutions between these two species, respectively. Molecular dynamics simulation results showed that amino acid substitutions in β-globin chains could lead to the elimination of hydrogen bonds in T-state Hb models of P. erythrurus. Based on the present data, we suggest that P. erythrurus has evolved an efficient oxygen transport system under unremitting hypobaric hypoxia.

  16. Development and application of procedures to evaluate air quality and visibility impacts of low-altitude flying operations

    Energy Technology Data Exchange (ETDEWEB)

    Liebsch, E.J.

    1990-08-01

    This report describes the development and application of procedures to evaluate the effects of low-altitude aircraft flights on air quality and visibility. The work summarized in this report was undertaken as part of the larger task of assessing the various potential environmental impacts associated with low-altitude military airspaces. Accomplishing the air quality/visibility analysis for the GEIS included (1) development and application of an integrated air quality model and aircraft emissions database specifically for Military Training Route (MTR) or similar flight operations, (2) selection and application of an existing air quality model to analyze the more widespread and less concentrated aircraft emissions from military Operations Areas (MOAs) and Restricted Areas (RAs), and (3) development and application of procedures to assess impacts of aircraft emissions on visibility. Existing air quality models were considered to be inadequate for predicting ground-level concentrations of pollutants emitted by aircraft along MTRs; therefore, the Single-Aircraft Instantaneous Line Source (SAILS) and Multiple-Aircraft Instantaneous Line Source (MAILS) models were developed to estimate potential impacts along MTRs. Furthermore, a protocol was developed and then applied in the field to determine the degree of visibility impairment caused by aircraft engine exhaust plumes. 19 refs., 2 figs., 3 tabs.

  17. Living High and Feeling Low: Altitude, Suicide, and Depression.

    Science.gov (United States)

    Kious, Brent M; Kondo, Douglas G; Renshaw, Perry F

After participating in this activity, learners should be better able to: (1) assess epidemiologic evidence that increased altitude of residence is linked to increased risk of depression and suicide; (2) evaluate strategies to address hypoxia-related depression and suicidal ideation. ABSTRACT: Suicide and major depressive disorder (MDD) are complex conditions that almost certainly arise from the influences of many interrelated factors. There are significant regional variations in the rates of MDD and suicide in the United States, suggesting that sociodemographic and environmental conditions contribute. Here, we review epidemiological evidence that increases in the altitude of residence are linked to the increased risk of depression and suicide. We consider the possibility that chronic hypobaric hypoxia (low blood oxygen related to low atmospheric pressure) contributes to suicide and depression, which is suggested by animal models, short-term studies in humans, and the effects of hypoxic medical conditions on suicide and depression. We argue that hypobaric hypoxia could promote suicide and depression by altering serotonin metabolism and brain bioenergetics; both of these pathways are implicated in depression, and both are affected by hypoxia. Finally, we briefly examine treatment strategies to address hypoxia-related depression and suicidal ideation that are suggested by these findings, including creatine monohydrate and the serotonin precursors tryptophan and 5-hydroxytryptophan.

  18. Real-Time Autonomous Obstacle Avoidance for Low-Altitude Fixed-Wing Aircraft

    Science.gov (United States)

    Owlia, Shahboddin

    The GeoSurv II is an Unmanned Aerial Vehicle (UAV) being developed by Carleton University and Sander Geophysics. This thesis is in support of the GeoSurv II project. The objective of the GeoSurv II project is to create a fully autonomous UAV capable of performing geophysical surveys. In order to achieve this level of autonomy, the UAV, which due to the nature of its surveys flies at low altitude, must be able to avoid potential obstacles such as trees, powerlines, telecommunication towers, etc. Developing a method to avoid these obstacles is the objective of this thesis. The literature is rich in methods for trajectory planning and mid-air collision avoidance with other aircraft. In contrast, in this thesis, a method for avoiding static obstacles that are not known a priori is developed. The potential flow theory and panel method are borrowed from fluid mechanics and are employed to generate evasive maneuvers when obstacles are encountered. By means of appropriate modelling of obstacles, the aircraft's constraints are taken into account such that the evasive maneuvers are feasible for the UAV. Moreover, the method is developed with consideration of the limitations of obstacle detection in GeoSurv II. Due to the unavailability of the GeoSurv II aircraft, and the lack of a complete model for GeoSurv II, the method developed is implemented on the non-linear model of the Aerosonde UAV. The Aerosonde model is then subjected to various obstacle scenarios and it is seen that the UAV successfully avoids the obstacles.
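
As an illustration of the potential-flow idea (not the GeoSurv II panel-method implementation), a guidance field can be built from a uniform flow toward the goal plus a point source at each detected obstacle, with the vehicle following the local velocity vector so its path bends smoothly around obstructions. All strengths and geometry below are assumptions:

```python
# Minimal sketch of potential-flow-style guidance: uniform flow toward the
# goal plus a point source at each obstacle; the heading follows the local
# velocity vector. Values are illustrative, not the thesis implementation.
import numpy as np

def flow_velocity(p, goal, obstacles, u_inf=1.0, source_strength=50.0):
    """2-D guidance velocity at position p (array of shape (2,))."""
    to_goal = goal - p
    v = u_inf * to_goal / np.linalg.norm(to_goal)        # uniform flow to goal
    for obs in obstacles:
        r = p - obs
        d2 = max(np.dot(r, r), 1e-6)
        v += source_strength * r / (2.0 * np.pi * d2)    # point-source repulsion
    return v

# integrate the streamline from start to goal with simple Euler steps
p, goal = np.array([0.0, 0.0]), np.array([100.0, 0.0])
obstacles = [np.array([50.0, 2.0])]                      # assumed obstacle
path = [p.copy()]
for _ in range(500):
    v = flow_velocity(p, goal, obstacles)
    p = p + v / np.linalg.norm(v) * 0.5                  # 0.5 m step length
    path.append(p.copy())
    if np.linalg.norm(goal - p) < 1.0:
        break
print(f"reached within {np.linalg.norm(goal - p):.1f} m of the goal in {len(path)} steps")
```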

  19. 75 FR 6319 - Proposed Amendment of Low Altitude Area Navigation Route T-254; Houston, TX

    Science.gov (United States)

    2010-02-09

    ... Amendment of Low Altitude Area Navigation Route T-254; Houston, TX AGENCY: Federal Aviation Administration... altitude Area Navigation (RNAV) route T-254 in the Houston, TX, terminal area by eliminating the segment... safety and the efficient use of the navigable airspace in the Houston, TX, terminal area. DATES: Comments...

  20. 75 FR 16336 - Establishment of Low Altitude Area Navigation Route (T-284); Houston, TX

    Science.gov (United States)

    2010-04-01

    ... (T-284); Houston, TX AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final rule. SUMMARY: This action establishes a low altitude area navigation (RNAV) route, designated T-284, in the Houston... navigable airspace in the Houston, TX, terminal area. DATES: Effective date 0901 UTC, July 29, 2010. The...

  1. 75 FR 18047 - Amendment of Low Altitude Area Navigation Route T-254; Houston, TX

    Science.gov (United States)

    2010-04-09

    ...-254; Houston, TX AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final rule. SUMMARY: This action amends low altitude Area Navigation (RNAV) route T-254 in the Houston, TX, terminal area by... Houston, TX, terminal area. DATES: Effective Dates: 0901 UTC, June 3, 2010. The Director of the Federal...

  2. Effects of repetitive training at low altitude on erythropoiesis in 400 and 800 m runners.

    Science.gov (United States)

    Frese, F; Friedmann-Bette, B

    2010-06-01

Classical altitude training can cause an increase in total hemoglobin mass (THM) if a minimum "dose of hypoxia" is reached (altitude ≥2,000 m, ≥3 weeks). We wanted to find out if repetitive exposure to mild hypoxia during living and training at low altitude can also stimulate erythropoiesis. THM was measured before and after two training camps at low altitude interspersed by 3 weeks of sea-level training, and at the same time points in a control group (CG) of 5 well-trained runners. EPO, sTfR and ferritin were also repeatedly measured during the altitude training camps. Repeated measures ANOVA revealed significant increases in EPO and sTfR levels during both training camps and a significant decrease in ferritin, indicating enhanced erythropoietic stimulation during living and training at low altitude. Furthermore, a significant augmentation of THM by 5.1% occurred in the course of the 2 altitude training camps. In conclusion, repetitive living and training at low altitude leads to a hypoxia-induced increase in erythropoietic stimulation in elite 400 m and 800 m runners and, apparently, might also cause a consecutive augmentation of THM.

  3. Unmanned aerial vehicle: A unique platform for low-altitude remote sensing for crop management

    Science.gov (United States)

    Unmanned aerial vehicles (UAV) provide a unique platform for remote sensing to monitor crop fields that complements remote sensing from satellite, aircraft and ground-based platforms. The UAV-based remote sensing is versatile at ultra-low altitude to be able to provide an ultra-high-resolution imag...

  4. Unmanned Aerial Systems Traffic Management (UTM): Safely Enabling UAS Operations in Low-Altitude Airspace

    Science.gov (United States)

    Rios, Joseph

    2016-01-01

Currently, there is no established infrastructure to enable and safely manage the widespread use of low-altitude airspace and UAS flight operations. Given this, and understanding that the FAA faces a mandate to modernize the present air traffic management system through computer automation and significantly reduce the number of air traffic controllers by FY 2020, the FAA maintains that a comprehensive, yet fully automated UAS traffic management (UTM) system for low-altitude airspace is needed. The concept of UTM is to begin by leveraging concepts from the system of roads, lanes, stop signs, rules and lights that govern vehicles on the ground today. Building on its legacy of work in air traffic management (ATM), NASA is working with industry to develop prototype technologies for a UAS Traffic Management (UTM) system that would evolve airspace integration procedures for enabling safe, efficient low-altitude flight operations that autonomously manage UAS operating in an approved low-altitude airspace environment. UTM is a cloud-based system that will autonomously manage all traffic at low altitudes, including UASs operated beyond the visual line of sight of an operator. UTM would thus enable safe and efficient flight operations by providing fully integrated traffic management services such as airspace design, corridors, dynamic geofencing, severe weather and wind avoidance, congestion management, terrain avoidance, route planning and re-routing, separation management, sequencing and spacing, and contingency management. UTM removes the need for human operators to continuously monitor aircraft operating in approved areas. NASA envisions concepts for two types of UTM systems. The first would be a small portable system, which could be moved between geographical areas in support of operations such as precision agriculture and public safety. The second would be a persistent system, which would support low-altitude operations in an approved area by providing continuous automated
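
One of the UTM services listed above, geofencing, reduces at its core to testing whether a vehicle position lies inside an approved operating polygon. A minimal sketch of a ray-casting point-in-polygon check with placeholder coordinates (not a NASA UTM interface):

```python
# Minimal sketch of a geofence check: a ray-casting point-in-polygon test
# that flags whether a vehicle position lies inside an approved operating
# polygon. Coordinates below are placeholders.
def inside_geofence(x, y, polygon):
    """Ray-casting test: True if (x, y) is inside the closed polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)           # edge straddles the horizontal ray
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

fence = [(0, 0), (0, 100), (100, 100), (100, 0)]  # assumed operating area
print(inside_geofence(50, 50, fence), inside_geofence(150, 50, fence))
```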

  5. Image Positioning Accuracy Analysis for Super Low Altitude Remote Sensing Satellites

    Directory of Open Access Journals (Sweden)

    Ming Xu

    2012-10-01

    Full Text Available Super low altitude remote sensing satellites maintain lower flight altitudes by means of ion propulsion in order to improve image resolution and positioning accuracy. The use of engineering data in the design to achieve image positioning accuracy is discussed in this paper based on the principles of photogrammetric theory. The combined design of key parameters ensures that the line of sight of each detector element can be exactly reconstructed and that this direction precisely intersects the Earth ellipsoid while the camera on the satellite is imaging. These parameters include: orbit determination accuracy, attitude determination accuracy, camera exposure time, accurate synchronization of the reception of ephemeris and attitude data, geometric calibration and precise orbit verification. Precise simulation calculations show that the image positioning accuracy of super low altitude remote sensing satellites is not obviously improved. The attitude determination error of a satellite still restricts its positioning accuracy.

  6. Efficient content-based low-altitude images correlated network and strips reconstruction

    Science.gov (United States)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry. Clearly, such manual intervention is not desirable for fully automatic photogrammetric data processing. In this paper, we explore a content-based approach without manual intervention or external information for strip reconstruction. Feature descriptors in local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded by the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. Then the image correlation network is reconstructed by similarity measures, image matching and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide rough relative orientation for further aerial triangulation.
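
    As a rough illustration of the TF-IDF weighting step described above, the hedged sketch below scores image similarity over a visual vocabulary. It assumes SIFT descriptors have already been quantized into visual-word IDs; the toy word lists are placeholders, not the authors' data or code.

```python
# Hypothetical sketch of TF-IDF weighted image similarity over a visual vocabulary.
# Each "image" is assumed to be already represented as a list of visual-word IDs
# (e.g. SIFT descriptors quantized against a vocabulary tree).
import math
from collections import Counter

def tfidf_vectors(images_as_words):
    """images_as_words: one list of visual-word IDs per image."""
    n_images = len(images_as_words)
    df = Counter()                        # in how many images each word occurs
    for words in images_as_words:
        df.update(set(words))
    idf = {w: math.log(n_images / df[w]) for w in df}

    vectors = []
    for words in images_as_words:
        tf = Counter(words)
        vec = {w: (tf[w] / len(words)) * idf[w] for w in tf}
        norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0   # L2-normalise
        vectors.append({w: v / norm for w, v in vec.items()})
    return vectors

def cosine_similarity(a, b):
    return sum(a[w] * b[w] for w in set(a) & set(b))

# Toy "images" described only by visual-word IDs
images = [[1, 2, 2, 5], [2, 5, 5, 7], [9, 9, 3, 4]]
vecs = tfidf_vectors(images)
print(cosine_similarity(vecs[0], vecs[1]))   # images sharing words: similarity > 0
print(cosine_similarity(vecs[0], vecs[2]))   # no shared words: similarity = 0
```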

  7. Long term observation of low altitude atmosphere by high precision polarization lidar

    Science.gov (United States)

    Shiina, Tatsuo; Noguchi, Kazuo; Fukuchi, Tetsuo

    2011-11-01

    Prediction of weather disasters such as heavy rain and lightning strikes is strongly desired. Successive monitoring of the low-altitude atmosphere is important for such prediction. Weather disasters often occur with a steep change in a local area, which is hard for conventional meteorological equipment to capture and report quickly. We have developed a near-range lidar to capture and analyze the low-altitude atmosphere. In this study, a high-precision polarization lidar was developed to observe the low-altitude atmosphere. This lidar has a high polarization extinction ratio of >30 dB to detect small polarization changes in the atmosphere. The change of polarization in the atmosphere allows detection of the depolarization effect and the Faraday effect, which are caused by ice crystals and lightning discharge, respectively. As the lidar optics is of the "inline" type, meaning the transmitter and receiver share common optics, it can observe the near-range echo with a narrow field of view. Long-term observation was accomplished at a low elevation angle. It aims to monitor the low-altitude atmosphere below the cloud base and to capture its spatial distribution and convection processes. From the viewpoint of polarization, the flow of ice crystals and concentration changes of aerosols are monitored. The observation has been continued on cloudy and rainy days. Thunderclouds are also a target. In this report, the system specification is explained to clarify the potential and the aims. Several observation datasets, including the long-term observation, are shown together with a polarization analysis.

  8. Oblique low-altitude image matching using robust perspective invariant features

    Science.gov (United States)

    He, Haiqing; Du, Jing; Chen, Xiaoyong; Wang, Yuqian

    2017-01-01

    Compared with vertical photogrammetry, oblique photogrammetry is radically different because images are acquired by sensors with large yaw, pitch, and roll angles. Image matching is a vital step and a core problem of the oblique low-altitude photogrammetric process. The most popular oblique image matching methods are currently SIFT/ASIFT and many affine-invariant feature-based approaches, which are mainly used in computer vision, but these methods are unsuitable when evenly distributed corresponding points and high efficiency are required simultaneously in oblique photogrammetry. In this paper, we present an oblique low-altitude image matching approach using robust perspective-invariant features. Firstly, the homography matrix is estimated from a few corresponding points obtained by matching the top pyramid images in several projective simulations. Then image matching is implemented with sub-pixel Harris corners and descriptors after perspective shape transformation based on the homography matrix. Finally, erroneous and gross-error matched points are excluded by epipolar geometry, the RANSAC algorithm and a back-projection constraint. Experimental results show that the proposed approach achieves better performance in oblique low-altitude image matching than common methods, including SIFT and SURF, and significantly improves computational efficiency compared with ASIFT and Affine-SURF.

  9. Unmanned Aircraft Systems Traffic Management (UTM) Safely Enabling UAS Operations in Low-Altitude Airspace

    Science.gov (United States)

    Kopardekar, Parimal H.

    2016-01-01

    Unmanned Aircraft System (UAS) Traffic Management (UTM) Enabling Civilian Low-Altitude Airspace and Unmanned Aircraft System Operations What is the problem? Many beneficial civilian applications of UAS have been proposed, from goods delivery and infrastructure surveillance, to search and rescue, and agricultural monitoring. Currently, there is no established infrastructure to enable and safely manage the widespread use of low-altitude airspace and UAS operations, regardless of the type of UAS. A UAS traffic management (UTM) system for low-altitude airspace may be needed, perhaps leveraging concepts from the system of roads, lanes, stop signs, rules and lights that govern vehicles on the ground today, whether the vehicles are driven by humans or are automated. What system technologies is NASA exploring? Building on its legacy of work in air traffic management for crewed aircraft, NASA is researching prototype technologies for a UAS Traffic Management (UTM) system that could develop airspace integration requirements for enabling safe, efficient low-altitude operations. While incorporating lessons learned from today's well-established air traffic management system, which grew out of a response to a mid-air collision over the Grand Canyon in the early days of commercial aviation, the UTM system would enable safe and efficient low-altitude airspace operations by providing services such as airspace design, corridors, dynamic geofencing, severe weather and wind avoidance, congestion management, terrain avoidance, route planning and re-routing, separation management, sequencing and spacing, and contingency management. One of the attributes of the UTM system is that it would not require human operators to monitor every vehicle continuously. The system could provide to human managers the data to make strategic decisions related to initiation, continuation, and termination of airspace operations. This approach would ensure that only authenticated UAS could operate

  10. Component Composition Using Feature Models

    DEFF Research Database (Denmark)

    Eichberg, Michael; Klose, Karl; Mitschke, Ralf

    2010-01-01

    In general, components provide and require services and two components are bound if the first component provides a service required by the second component. However, certain variability in services - w.r.t. how and which functionality is provided or required - cannot be described using standard i...

  11. Massive photometry of low-altitude artificial satellites on Mini-Mega-TORTORA

    Science.gov (United States)

    Karpov, S.; Katkova, E.; Beskin, G.; Biryukov, A.; Bondar, S.; Davydov, E.; Ivanov, E.; Perkov, A.; Sasyuk, V.

    2016-12-01

    The nine-channel Mini-Mega-TORTORA (MMT-9) optical wide-field monitoring system with high temporal resolution has been in operation since June 2014. The system has 0.1 s temporal resolution and an effective detection limit of around 10 mag (calibrated to the V filter) for fast-moving objects on this timescale. In addition to its primary scientific operation, the system detects 200-500 tracks of satellites every night, in both low-altitude and high-ellipticity orbits. Using these data we have created and maintain a public database of photometric characteristics for these satellites, available online.

  12. Safely Enabling Civilian Unmanned Aerial System (UAS) Operations in Low-Altitude Airspace by Unmanned Aerial System Traffic Management (UTM)

    Science.gov (United States)

    Kopardekar, Parimal Hemchandra

    2015-01-01

    Many UAS will operate at lower altitude (Class G, below 2000 feet). There is an urgent need for a system for civilian low-altitude airspace and UAS operations. Stakeholders want to work with NASA to enable safe operations.

  13. Stochastic Modeling Of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2014-01-01

    reliable components are needed for wind turbines. In this paper, focus is on the reliability of critical drivetrain components such as bearings and shafts. High failure rates of these components imply a need for more reliable components. To estimate the reliability of these components, stochastic models...

  14. Interaction between the low altitude atmosphere and clouds by high-precision polarization lidar

    Science.gov (United States)

    Shiina, Tatsuo; Noguchi, Kazuo; Fukuchi, Tetsuo

    2012-11-01

    Lidar is a powerful remote sensing tool for monitoring weather changes and environmental issues, but the technique need not be restricted to those fields. In this study, the authors aim to apply it to the prediction of weather disasters. Heavy rain and lightning strikes are our targets. An inline-type MPL (micro pulse lidar) has been used to grasp the interaction between low-altitude clouds and the atmosphere and to predict heavy rain, while it was hard to catch the sign of a lightning strike. The authors introduced a new algorithm to catch the direct sign of the lightning strike. The Faraday effect is caused by lightning discharge in the ionized atmosphere. This effect interacts with the polarization of the propagating beam, that is, the polarization plane is rotated by the effect. In this study, a high-precision polarization lidar was developed to grasp the small rotation angle of the polarization of the propagating beam. In this report, the interaction between low-altitude clouds and the atmosphere was monitored by the high-precision polarization lidar, and the observation results of the lightning discharge were analyzed.

  15. Low altitude observations of the energetic electrons in the outer radiation belt during isolated substorms

    International Nuclear Information System (INIS)

    Varga, L.; Venkatesan, D.; Johns Hopkins Univ., Laurel, MD; Meng, C.I.

    1985-01-01

    The low energy (1-20 keV) detector registering particles onboard the polar-orbiting low altitude (approx. 850 km) DMSP-F2 and -F3 satellites also records high energy electrons penetrating the detector walls. Thus the dynamics of this electron population at L=3.5 can be studied during isolated periods of magnetospheric substorms identified by the indices of the auroral electrojet (AE), geomagnetic activity (Ksub(p)) and ring current (Dsub(st)). Temporal changes in the electron flux during the substorms are observed to be an additional contribution riding on top of the pre-storm (or geomagnetically quiet-time) electron population; the duration of the interval of intensity variations is observed to be about the same as that of the enhancement of the AE index. This indicates the temporal response of the outer radiation belt to substorm activity, since the observation was made in the "horns" of the outer radiation belt. The observed enhanced radiation at low altitude may be associated with the instantaneous increase and/or dumping of the outer radiation belt energetic electrons during each isolated substorm activity. (author)

  16. Normobaric Hypoxia Exposure during Low Altitude Stay and Performance of Elite-Level Race-Walkers

    Directory of Open Access Journals (Sweden)

    Gaurav Sikri, AB Srinivasa

    2016-06-01

    Full Text Available We read with profound interest the article titled ‘Increased hypoxic dose after training at low altitude with 9 h per night at 3000 m normobaric hypoxia’ by Carr et al. (2015). The authors concluded that low altitude (1380 m) combined with normobaric hypoxia of 3000 m improves total haemoglobin mass (Hbmass) and is an effective alternative training method. Like other studies on elite athletes, the authors of the present work point out that a major limitation was the non-availability of a control group consisting of subjects undertaking the same supervised training in normoxia. The total number of ‘possible’ subjects for the control group, which were taken from a previous study (Saunders et al., 2010), was 11, i.e. a placebo group (n = 6; 3 male and 3 female) and a nocebo group (n = 5; 3 female and 2 male). It seems likely that the authors of the present study have chosen only 10 subjects out of those 11. The criteria for exclusion of one subject and selection of 10 out of 11 subjects from the previous study to form the control group of the present study may require further elaboration.

  17. Adaptive clutter rejection filters for airborne Doppler weather radar applied to the detection of low altitude windshear

    Science.gov (United States)

    Keel, Byron M.

    1989-01-01

    An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low altitudes for the detection of windshear in an airport terminal area where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square-root normalized recursive least squares lattice estimation algorithm which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers possible fixed-point implementation. A 10th order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter returns, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter-induced bias in pulse-pair estimates of windspeed.
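
    For context, the sketch below shows a standard pulse-pair estimator of mean velocity and spectrum width from complex I/Q samples, the kind of estimator the filtered data above are evaluated with. The wavelength, pulse repetition interval and synthetic signal are illustrative assumptions, not values from the paper.

```python
# Minimal pulse-pair sketch: mean velocity and spectrum width from I/Q samples.
# Sign convention depends on the I/Q definition; here positive phase progression
# maps to positive velocity. All numbers below are illustrative only.
import numpy as np

def pulse_pair(z, wavelength, pri):
    """z: complex I/Q samples from one range gate; wavelength [m]; pri [s]."""
    r0 = np.mean(np.abs(z) ** 2)                 # lag-0 autocorrelation (power)
    r1 = np.mean(np.conj(z[:-1]) * z[1:])        # lag-1 autocorrelation
    velocity = wavelength / (4 * np.pi * pri) * np.angle(r1)
    ratio = min(np.abs(r1) / r0, 0.999999)       # guard against |R1| >= R0
    width = wavelength / (2 * np.pi * pri * np.sqrt(2)) * np.sqrt(np.log(1 / ratio))
    return velocity, width

# Synthetic example: a 5 m/s scatterer at X-band (3.2 cm) with 1 ms PRI
wl, pri, v_true = 0.032, 1e-3, 5.0
n = np.arange(64)
z = np.exp(1j * 4 * np.pi * v_true * n * pri / wl) + 0.05 * (
    np.random.randn(64) + 1j * np.random.randn(64))
print(pulse_pair(z, wl, pri))   # velocity close to 5 m/s, small width
```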

  18. Unmanned Aerial System (UAS) Traffic Management (UTM): Enabling Low-Altitude Airspace and UAS Operations

    Science.gov (United States)

    Kopardekar, Parimal H.

    2014-01-01

    Many civilian applications of Unmanned Aerial Systems (UAS) have been imagined, ranging from remote to congested urban areas, including goods delivery, infrastructure surveillance, agricultural support, and medical services delivery. Further, these UAS will have different equipage and capabilities based on considerations such as affordability and mission needs. Such a heterogeneous UAS mix, along with operations such as general aviation, helicopters, and gliders, must be safely accommodated at lower altitudes. However, key infrastructure to enable and safely manage widespread use of low-altitude airspace and the UAS operations therein does not exist. Therefore, NASA is exploring functional design, concept and technology development, and a prototype UAS Traffic Management (UTM) system. UTM will support safe and efficient UAS operations for the delivery of goods and services

  19. High-resolution Ceres Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images

    Science.gov (United States)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2017-06-01

    The Dawn spacecraft Framing Camera (FC) acquired over 31,300 clear filter images of Ceres with a resolution of about 35 m/pxl during the eleven cycles in the Low Altitude Mapping Orbit (LAMO) phase between December 16 2015 and August 8 2016. We ortho-rectified the images from the first four cycles and produced a global, high-resolution, uncontrolled photomosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 62 tiles mapped at a scale of 1:250,000. The nomenclature used in this atlas was proposed by the Dawn team and was approved by the International Astronomical Union (IAU). The full atlas is available to the public through the Dawn Geographical Information System (GIS) web page [http://dawngis.dlr.de/atlas] and will become available through the NASA Planetary Data System (PDS) (http://pdssbn.astro.umd.edu/).

  20. Understanding symmetrical components for power system modeling

    CERN Document Server

    Das, J C

    2017-01-01

    This book utilizes symmetrical components for analyzing unbalanced three-phase electrical systems, by applying single-phase analysis tools. The author covers two approaches for studying symmetrical components: the physical approach, avoiding many mathematical matrix algebra equations, and a mathematical approach, using matrix theory. Divided into seven sections, topics include: symmetrical components using matrix methods, fundamental concepts of symmetrical components, symmetrical components – transmission lines and cables, sequence components of rotating equipment and static load, three-phase models of transformers and conductors, unsymmetrical fault calculations, and some limitations of symmetrical components.
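
    As a worked illustration of the transformation underlying the book's subject, the following sketch decomposes three phase phasors into zero-, positive- and negative-sequence components. The numeric example is a balanced set chosen only as a sanity check, not taken from the text.

```python
# Symmetrical-component transformation: phase phasors -> sequence phasors.
import numpy as np

a = np.exp(2j * np.pi / 3)  # the 120-degree rotation operator

def phase_to_sequence(va, vb, vc):
    """Return (V0, V1, V2) sequence phasors from phase phasors Va, Vb, Vc."""
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a**2 * vc) / 3
    v2 = (va + a**2 * vb + a * vc) / 3
    return v0, v1, v2

# Example: a balanced positive-sequence set yields V1 only, with V0 = V2 = 0
va = 1.0 + 0j
vb = va * a**2        # lags phase A by 120 degrees
vc = va * a           # lags phase A by 240 degrees
print([round(abs(x), 6) for x in phase_to_sequence(va, vb, vc)])  # [0.0, 1.0, 0.0]
```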

  1. Modeling the degradation of nuclear components

    International Nuclear Information System (INIS)

    Stock, D.; Samanta, P.; Vesely, W.

    1993-01-01

    This paper describes component level reliability models that use information on degradation to predict component reliability, and which have been used to evaluate different maintenance and testing policies. The models are based on continuous time Markov processes, and are a generalization of reliability models currently used in Probabilistic Risk Assessment. An explanation of the models, the model parameters, and an example of how these models can be used to evaluate maintenance policies are discussed
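
    To make the idea concrete, the hedged sketch below sets up a generic continuous-time Markov degradation model (Good, Degraded, Failed states with illustrative rates) and propagates the state probabilities over time. It is not the report's actual model or parameterization.

```python
# Generic continuous-time Markov degradation sketch with illustrative rates.
import numpy as np
from scipy.linalg import expm

# Generator matrix Q: rows sum to zero; Q[i, j] is the i -> j transition rate [1/yr].
lam_d, lam_f, mu = 0.5, 0.2, 2.0   # degradation, failure and repair rates
Q = np.array([
    [-lam_d,         lam_d,  0.0  ],   # Good
    [  mu,   -(mu + lam_f),  lam_f],   # Degraded (repairable)
    [  0.0,           0.0,   0.0  ],   # Failed (absorbing in this sketch)
])

p0 = np.array([1.0, 0.0, 0.0])          # start in the Good state
for t in (1.0, 5.0, 10.0):
    p_t = p0 @ expm(Q * t)              # state probabilities after t years
    print(f"t = {t:>4} yr  P(Good, Degraded, Failed) = {np.round(p_t, 3)}")
```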

  2. An X-Band Radar Terrain Feature Detection Method for Low-Altitude SVS Operations and Calibration Using LiDAR

    Science.gov (United States)

    Young, Steve; UijtdeHaag, Maarten; Campbell, Jacob

    2004-01-01

    To enable safe use of Synthetic Vision Systems at low altitudes, real-time range-to-terrain measurements may be required to ensure the integrity of terrain models stored in the system. This paper reviews and extends previous work describing the application of x-band radar to terrain model integrity monitoring. A method of terrain feature extraction and a transformation of the features to a common reference domain are proposed. Expected error distributions for the extracted features are required to establish appropriate thresholds whereby a consistency-checking function can trigger an alert. A calibration-based approach is presented that can be used to obtain these distributions. To verify the approach, NASA's DC-8 airborne science platform was used to collect data from two mapping sensors. An Airborne Laser Terrain Mapping (ALTM) sensor was installed in the cargo bay of the DC-8. After processing, the ALTM produced a reference terrain model with a vertical accuracy of less than one meter. Also installed was a commercial-off-the-shelf x-band radar in the nose radome of the DC-8. Although primarily designed to measure precipitation, the radar also provides estimates of terrain reflectivity at low altitudes. Using the ALTM data as the reference, errors in features extracted from the radar are estimated. A method to estimate errors in features extracted from the terrain model is also presented.

  3. Liquefaction of wood and its model components

    NARCIS (Netherlands)

    Barnés, M. Castellví; de Visser, M. M.; van Rossum, G.; Kersten, S. R.A.; Lange, J. P.

    2017-01-01

    Pinewood and various model components were liquefied to bio-oil at 300–310 °C in 1-methylnaphthalene to study the chemistry of the liquefaction process. Cellulose, amylopectin and organosolv lignin were used as model components for the cellulose, hemicellulose and lignin parts of the wood.

  4. Detection of laurel wilt disease in avocado using low altitude aerial imaging.

    Directory of Open Access Journals (Sweden)

    Ana I de Castro

    Full Text Available Laurel wilt is a lethal disease of plants in the Lauraceae plant family, including avocado (Persea americana). This devastating disease has spread rapidly along the southeastern seaboard of the United States and has begun to affect commercial avocado production in Florida. The main objective of this study was to evaluate the potential to discriminate laurel wilt-affected avocado trees using aerial images taken with a modified camera during helicopter surveys at low altitude in the commercial avocado production area. The ability to distinguish laurel wilt-affected trees from other factors that produce similar external symptoms was also studied. RmodGB digital values of healthy trees and laurel wilt-affected trees, as well as fruit stress and vines covering trees, were used to calculate several vegetation indices (VIs), band ratios, and VI combinations. These indices were subjected to analysis of variance (ANOVA) and an M-statistic was computed in order to quantify the separability of those classes. Significant differences in spectral values between laurel wilt-affected and healthy trees were observed in all vegetation indices calculated, although the best results were achieved with Excess Red (ExR), (Red-Green) and Combination 1 (COMB1) in all locations. B/G showed very good potential for separating the other factors with symptoms similar to laurel wilt, such as fruit stress and vines covering trees, from laurel wilt-affected trees. These consistent results prove the usefulness of using a modified camera (RmodGB) to discriminate laurel wilt-affected avocado trees from healthy trees, as well as from other factors that cause the same symptoms, and suggest performing the classification in further research. According to our results, ExR and B/G should be utilized to develop an algorithm or decision rules to classify aerial images, since they showed the highest capacity to discriminate laurel wilt-affected trees. This methodology may allow the

  5. Detection of laurel wilt disease in avocado using low altitude aerial imaging.

    Science.gov (United States)

    de Castro, Ana I; Ehsani, Reza; Ploetz, Randy C; Crane, Jonathan H; Buchanon, Sherrie

    2015-01-01

    Laurel wilt is a lethal disease of plants in the Lauraceae plant family, including avocado (Persea americana). This devastating disease has spread rapidly along the southeastern seaboard of the United States and has begun to affect commercial avocado production in Florida. The main objective of this study was to evaluate the potential to discriminate laurel wilt-affected avocado trees using aerial images taken with a modified camera during helicopter surveys at low altitude in the commercial avocado production area. The ability to distinguish laurel wilt-affected trees from other factors that produce similar external symptoms was also studied. RmodGB digital values of healthy trees and laurel wilt-affected trees, as well as fruit stress and vines covering trees, were used to calculate several vegetation indices (VIs), band ratios, and VI combinations. These indices were subjected to analysis of variance (ANOVA) and an M-statistic was computed in order to quantify the separability of those classes. Significant differences in spectral values between laurel wilt-affected and healthy trees were observed in all vegetation indices calculated, although the best results were achieved with Excess Red (ExR), (Red-Green) and Combination 1 (COMB1) in all locations. B/G showed very good potential for separating the other factors with symptoms similar to laurel wilt, such as fruit stress and vines covering trees, from laurel wilt-affected trees. These consistent results prove the usefulness of using a modified camera (RmodGB) to discriminate laurel wilt-affected avocado trees from healthy trees, as well as from other factors that cause the same symptoms, and suggest performing the classification in further research. According to our results, ExR and B/G should be utilized to develop an algorithm or decision rules to classify aerial images, since they showed the highest capacity to discriminate laurel wilt-affected trees. This methodology may allow the rapid detection
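
    As an illustration of the index-based discrimination described above, the sketch below computes ExR (using the commonly cited chromatic-coordinate definition), a Red-minus-Green difference and the B/G ratio for toy crown pixels. The image values, thresholds and exact index formulations used by the study may differ.

```python
# Hedged per-crown vegetation-index sketch; arrays and values are illustrative.
import numpy as np

def indices(rgb):
    """rgb: float array (..., 3) of R, G, B digital values for one tree crown."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9
    rc, gc = r / total, g / total        # chromatic coordinates
    exr = 1.4 * rc - gc                  # Excess Red (common definition)
    red_minus_green = r - g
    b_over_g = b / (g + 1e-9)
    return exr.mean(), red_minus_green.mean(), b_over_g.mean()

# Toy crowns: a "healthy" green crown vs. a reddish, wilt-like crown
healthy = np.array([[[60, 140, 50]]], dtype=float)
affected = np.array([[[150, 90, 60]]], dtype=float)
print("healthy :", np.round(indices(healthy), 3))
print("affected:", np.round(indices(affected), 3))
```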

  6. Model Of Reconfiguration In Component Environments

    OpenAIRE

    Jakub Grzesiak; Łukasz Jędrychowski

    2015-01-01

    The significance of component-based software and component platforms has increased in the last 20 years. To achieve full flexibility there is a need for a reconfiguration process, which allows changing the parameters of a system without rebuilding or restarting it. In terms of components, such a process should be executed with extraordinary care, as contracts between components have to be preserved. In this article, the model of reconfiguration and the roles of the components used in the process...

  7. Store-operated channels in the pulmonary circulation of high- and low-altitude neonatal lambs.

    Science.gov (United States)

    Parrau, Daniela; Ebensperger, Germán; Herrera, Emilio A; Moraga, Fernando; Riquelme, Raquel A; Ulloa, César E; Rojas, Rodrigo T; Silva, Pablo; Hernandez, Ismael; Ferrada, Javiera; Diaz, Marcela; Parer, Julian T; Cabello, Gertrudis; Llanos, Aníbal J; Reyes, Roberto V

    2013-04-15

    We determined whether store-operated channels (SOC) are involved in neonatal pulmonary artery function under conditions of acute and chronic hypoxia, using newborn sheep gestated and born either at high altitude (HA, 3,600 m) or low altitude (LA, 520 m). Cardiopulmonary variables were recorded in vivo, with and without SOC blockade by 2-aminoethyldiphenylborinate (2-APB), during basal or acute hypoxic conditions. 2-APB did not have effects on basal mean pulmonary arterial pressure (mPAP), cardiac output, systemic arterial blood pressure, or systemic vascular resistance in both groups of neonates. During acute hypoxia 2-APB reduced mPAP and pulmonary vascular resistance in LA and HA, but this reduction was greater in HA. In addition, isolated pulmonary arteries mounted in a wire myograph were assessed for vascular reactivity. HA arteries showed a greater relaxation and sensitivity to SOC blockers than LA arteries. The pulmonary expression of two SOC-forming subunits, TRPC4 and STIM1, was upregulated in HA. Taken together, our results show that SOC contribute to hypoxic pulmonary vasoconstriction in newborn sheep and that SOC are upregulated by chronic hypoxia. Therefore, SOC may contribute to the development of neonatal pulmonary hypertension. We propose SOC channels could be potential targets to treat neonatal pulmonary hypertension.

  8. Car Detection from Low-Altitude UAV Imagery with the Faster R-CNN

    Directory of Open Access Journals (Sweden)

    Yongzheng Xu

    2017-01-01

    Full Text Available UAV based traffic monitoring holds distinct advantages over traditional traffic sensors, such as loop detectors, as UAVs have higher mobility, wider field of view, and less impact on the observed traffic. For traffic monitoring from UAV images, the essential but challenging task is vehicle detection. This paper extends the framework of Faster R-CNN for car detection from low-altitude UAV imagery captured over signalized intersections. Experimental results show that Faster R-CNN can achieve promising car detection results compared with other methods. Our tests further demonstrate that Faster R-CNN is robust to illumination changes and cars’ in-plane rotation. Besides, the detection speed of Faster R-CNN is insensitive to the detection load, that is, the number of detected cars in a frame; therefore, the detection speed is almost constant for each frame. In addition, our tests show that Faster R-CNN holds great potential for parking lot car detection. This paper tries to guide the readers to choose the best vehicle detection framework according to their applications. Future research will be focusing on expanding the current framework to detect other transportation modes such as buses, trucks, motorcycles, and bicycles.
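
    For readers unfamiliar with the detector, the hedged sketch below runs an off-the-shelf torchvision Faster R-CNN (COCO-pretrained, where label 3 is "car") on a single hypothetical frame. The authors train their own network on UAV imagery, so this is illustrative only; the file name is a placeholder.

```python
# Minimal Faster R-CNN inference sketch using torchvision; not the paper's model.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("uav_frame.jpg").convert("RGB")   # hypothetical input frame
with torch.no_grad():
    pred = model([to_tensor(image)])[0]

# Keep confident "car" detections (COCO class 3) and print their boxes
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if label.item() == 3 and score.item() > 0.5:
        print([round(v) for v in box.tolist()], round(score.item(), 2))
```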

  9. Oil palm pest infestation monitoring and evaluation by helicopter-mounted, low altitude remote sensing platform

    Science.gov (United States)

    Samseemoung, Grianggai; Jayasuriya, Hemantha P. W.; Soni, Peeyush

    2011-01-01

    Timely detection of pest or disease infections is extremely important for controlling the spread of disease and preventing crop productivity losses. A specifically designed radio-controlled, helicopter-mounted low altitude remote sensing (LARS) platform can offer near-real-time results upon user demand. The acquired LARS images were processed to estimate vegetation indices and thereby detect upper stem rot (Phellinus noxius) disease in both young and mature oil palm plants. The indices helped discriminate healthy and infested plants through visualization, analysis and presentation with digital imagery software, and were validated with ground truth data. Good correlations and clear data clusters were obtained in characteristic plots of the normalized difference vegetation index NDVI(LARS) and the green normalized difference vegetation index GNDVI(LARS) against NDVI(Spectro) and chlorophyll content, by which infested plants were discriminated from healthy plants in both young and mature crops. The chlorophyll content values (μmol m-2) showed notable differences among clusters for healthy young (972 to 1100), infested young (253 to 400), healthy mature (1210 to 1500), and infested mature (440 to 550) oil palm. The correlation coefficients (R2) were in a reasonably acceptable range (0.62 to 0.88). The vegetation indices based on LARS images provided satisfactory results when compared to other approaches. The developed technology showed promising scope for medium and large plantations.

  10. Automatic Registration of Low Altitude UAV Sequent Images and Laser Point Clouds

    Directory of Open Access Journals (Sweden)

    CHEN Chi

    2015-05-01

    Full Text Available A novel registration method is proposed for the automatic co-registration of unmanned aerial vehicle (UAV) image sequences and laser point clouds. Firstly, contours of building roofs are extracted from the image sequence and the laser point clouds using a marked point process and local salient region detection, respectively. The contours from the two data sources are matched via back-projection proximity. Secondly, the exterior orientations of the images are recovered using a linear solver based on the contour corner pairs, followed by a coplanarity optimization implied by the matched lines from the contour pairs. Finally, the exterior orientation parameters of the images are further optimized by matching 3D points generated from the image sequence with the laser point clouds using an iterative closest point (ICP) algorithm with a relative movement threshold constraint. Experiments are undertaken to check the validity and effectiveness of the proposed method. The results show that the proposed method robustly achieves high-precision co-registration of low-altitude UAV image sequences and laser point clouds. The accuracy of the co-produced DOMs meets 1:500 scale standards.
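
    The final refinement step is an ICP-style alignment. The following sketch implements a generic point-to-point ICP (nearest-neighbour matching plus an SVD rigid transform) on synthetic points, without the relative-movement constraint used by the authors.

```python
# Generic point-to-point ICP sketch on synthetic 3D points.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=20):
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)            # nearest-neighbour correspondences
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t
    return current

# Toy example: recover a small rotation + translation
rng = np.random.default_rng(0)
dst = rng.random((200, 3))
theta = np.radians(3)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = (dst - 0.05) @ Rz.T
aligned = icp(src, dst)
dists, _ = cKDTree(dst).query(aligned)
print("mean nearest-neighbour distance after ICP:", round(float(dists.mean()), 4))
```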

  11. Low altitude unmanned aerial vehicle for characterising remediation effectiveness following the FDNPP accident

    International Nuclear Information System (INIS)

    Martin, P.G.; Payton, O.D.; Fardoulis, J.S.; Richards, D.A.; Yamashiki, Y.; Scott, T.B.

    2016-01-01

    On the 12th of March 2011, The Great Tōhoku Earthquake occurred 70 km off the eastern coast of Japan, generating a large 14 m high tsunami. The ensuing catalogue of events over the succeeding 12 d resulted in the release of considerable quantities of radioactive material into the environment. Important to the large-scale remediation of the affected areas is the accurate and high spatial resolution characterisation of contamination, including the verification of decontaminated areas. To enable this, a low altitude unmanned aerial vehicle equipped with a lightweight gamma-spectrometer and height normalisation system was used to produce sub-meter resolution maps of contamination. This system provided a valuable method to examine both contaminated and remediated areas rapidly, whilst greatly reducing the dose received by the operator, typically in localities formerly inaccessible to ground-based survey methods. The characterisation of three sites within Fukushima Prefecture is presented; one remediated (and a site of much previous attention), one un-remediated and a third having been subjected to an alternative method to reduce emitted radiation dose. - Highlights: • Contamination near FDNPP was mapped with a UAV. • Effectiveness of remediation is observed. • Sub-meter resolution mapping is achieved. • Isotopic nature of radiation is determined.

  12. Low altitude unmanned aerial vehicle for characterising remediation effectiveness following the FDNPP accident.

    Science.gov (United States)

    Martin, P G; Payton, O D; Fardoulis, J S; Richards, D A; Yamashiki, Y; Scott, T B

    2016-01-01

    On the 12th of March 2011, The Great Tōhoku Earthquake occurred 70 km off the eastern coast of Japan, generating a large 14 m high tsunami. The ensuing catalogue of events over the succeeding 12 d resulted in the release of considerable quantities of radioactive material into the environment. Important to the large-scale remediation of the affected areas is the accurate and high spatial resolution characterisation of contamination, including the verification of decontaminated areas. To enable this, a low altitude unmanned aerial vehicle equipped with a lightweight gamma-spectrometer and height normalisation system was used to produce sub-meter resolution maps of contamination. This system provided a valuable method to examine both contaminated and remediated areas rapidly, whilst greatly reducing the dose received by the operator, typically in localities formerly inaccessible to ground-based survey methods. The characterisation of three sites within Fukushima Prefecture is presented; one remediated (and a site of much previous attention), one un-remediated and a third having been subjected to an alternative method to reduce emitted radiation dose. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. ELF and VLF signatures of sprites registered onboard the low altitude satellite DEMETER

    Directory of Open Access Journals (Sweden)

    J. Błęcki

    2009-06-01

    Full Text Available We report the observation of ELF and VLF signatures of sprites recorded on the low-altitude satellite DEMETER during thunderstorm activity. At an altitude of ~700 km, waves observed on the E-field spectrograms at mid-to-low latitudes during night time are mainly dominated by up-going 0+ whistlers. During the night of 20 July 2007 two sprites were observed around 20:10:08 UT from the observatory located on the top of the mountain Śnieżka in Poland (50°44'09" N, 15°44'21" E, 1603 m), and ELF and VLF data were recorded by the satellite at about 1200 km from the region of thunderstorm activity. During this event, the DEMETER instruments were switched to burst mode and it was possible to register the waveforms. It is shown that the two sprites were triggered by two intense +CG lightning strokes (100 kA) occurring during the same millisecond but not at the same location. Despite the distance, DEMETER recorded at the same time intense and unusual ELF and VLF emissions. It is shown that the whistler wave propagates from the thunderstorm regions in the Earth-ionosphere waveguide and enters the ionosphere below the satellite. The emissions last several tens of milliseconds and the intensity of the ELF waveform is close to 1 mV/m. A particularly intense proton whistler is also associated with these emissions.

  14. The Gravity Field of Mercury After the Messenger Low-Altitude Campaign

    Science.gov (United States)

    Mazarico, Erwan; Genova, Antonio; Goossens, Sander; Lemoine, Frank G.; Smith, David E.; Zuber, Maria T.; Neumann, Gary A.; Solomon, Sean C.

    2015-01-01

    The final year of the MESSENGER mission was designed to take advantage of the remaining propellant onboard to provide a series of low-altitude observation campaigns and acquire novel scientific data about the innermost planet. The lower periapsis altitude greatly enhances the sensitivity to the short-wavelength gravity field, but only when the spacecraft is in view of Earth. After more than 3 years in orbit around Mercury, the MESSENGER spacecraft was tracked for the first time below 200-km altitude on 5 May 2014 by the NASA Deep Space Network (DSN). Between August and October, periapsis passages down to 25-km altitude were routinely tracked. These periods considerably improved the quality of the data coverage. Before the end of its mission, MESSENGER will fly at very low altitudes for extended periods of time. Given the orbital geometry, however, the periapses will not be visible from Earth, and so no new tracking data will be available for altitudes lower than 75 km. Nevertheless, the continuous tracking of MESSENGER in the northern hemisphere will help improve the uniformity of the spatial coverage at altitudes lower than 150 km, which will further improve the overall quality of the Mercury gravity field.

  15. The Advantage by Using Low-Altitude UAV for Sustainable Urban Development Control

    Science.gov (United States)

    Djimantoro, Michael I.; Suhardjanto, Gatot

    2017-12-01

    The city will always grow and develop along with its increasing population, which creates more demand for building space in the city. These development requirements can be met by the government, the private sector or individual actors, but development needs to follow the ordinances set out in the city plan to avoid adverse negative impacts in the future. The problem arises when the development to be monitored is in a city like Jakarta, Indonesia, which has an area of 661 square kilometres, while the number of government employees is limited. Therefore, it is important to develop new tools to monitor the development of the city, given the large development area and the limited resources. This research explores the use of a low-altitude UAV (Unmanned Aerial Vehicle) combined with photogrammetry techniques, a rapidly developing technology, to collect as-built building development information in a real-time, cost-effective and efficient manner. The results explore the possibility of using the UAV in sustainable urban development control and show that it can detect anomalies in the development.

  16. Inferring electromagnetic ion cyclotron wave intensity from low altitude POES proton flux measurements: A detailed case study with conjugate Van Allen Probes observations

    Science.gov (United States)

    Zhang, Yang; Shi, Run; Ni, Binbin; Gu, Xudong; Zhang, Xianguo; Zuo, Pingbing; Fu, Song; Xiang, Zheng; Wang, Qi; Cao, Xing; Zou, Zhengyang

    2017-03-01

    Electromagnetic ion cyclotron (EMIC) waves play an important role in the magnetospheric particle dynamics and can lead to resonant pitch-angle scattering and ultimate precipitation of ring current protons. Commonly, the statistics of in situ EMIC wave measurements is adopted for quantitative investigation of wave-particle interaction processes, which however becomes questionable for detailed case studies especially during geomagnetic storms and substorms. Here we establish a novel technique to infer EMIC wave amplitudes from low-altitude proton measurements onboard the Polar Operational Environmental Satellites (POES). The detailed procedure is elaborated regarding how to infer the EMIC wave intensity for one specific time point. We then test the technique with a case study comparing the inferred root-mean-square (RMS) EMIC wave amplitude with the conjugate Van Allen Probes EMFISIS wave measurements. Our results suggest that the developed technique can reasonably estimate EMIC wave intensities from low-altitude POES proton flux data, thereby providing a useful tool to construct a data-based, near-real-time, dynamic model of the global distribution of EMIC waves once the proton flux measurements from multiple POES satellites are available for any specific time period.

  17. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report

  18. DETERMINING SPECTRAL REFLECTANCE COEFFICIENTS FROM HYPERSPECTRAL IMAGES OBTAINED FROM LOW ALTITUDES

    Directory of Open Access Journals (Sweden)

    P. Walczykowski

    2016-06-01

    Full Text Available Remote sensing plays a very important role in many different study fields, like hydrology, crop management, environmental and ecosystem studies. For all the mentioned areas of interest, different remote sensing and image processing techniques, such as image classification (object- and pixel-based), object identification, change detection, etc., can be applied. Most of these techniques use spectral reflectance coefficients as the basis for the identification and distinction of different objects and materials, e.g. monitoring of vegetation stress, identification of water pollutants, yield identification, etc. Spectral characteristics are usually acquired using discrete methods such as spectrometric measurements in both laboratory and field conditions. Such measurements however can be very time consuming, which has led many international researchers to investigate the reliability and accuracy of using image-based methods. According to published and ongoing studies, in order to acquire these spectral characteristics from images, it is necessary to have hyperspectral data. The presented article describes a series of experiments conducted using the push-broom Headwall MicroHyperspec A-series VNIR. This hyperspectral scanner allows for registration of images with more than 300 spectral channels with a 1.9 nm spectral bandwidth in the 380-1000 nm range. The aim of these experiments was to establish a methodology for acquiring spectral reflectance characteristics of different forms of land cover using such a sensor. All research work was conducted in controlled conditions from low altitudes. Hyperspectral images obtained with this specific type of sensor require a unique approach in terms of post-processing, especially radiometric correction. Large amounts of acquired imagery data allowed the authors to establish a new post-processing approach. The developed methodology allowed the authors to obtain spectral reflectance coefficients from a hyperspectral sensor
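
    One common way to convert raw hyperspectral digital numbers into reflectance, and possibly related to the radiometric correction discussed here, is the empirical line method: fitting a per-band gain and offset from reference panels of known reflectance. The sketch below is not necessarily the authors' exact methodology, and all values are illustrative.

```python
# Empirical line calibration sketch: DN -> reflectance per band.
import numpy as np

def empirical_line(dn_panels, reflectance_panels, dn_scene):
    """
    dn_panels:          (n_panels, n_bands) mean DNs over each reference panel
    reflectance_panels: (n_panels, n_bands) known panel reflectances
    dn_scene:           (..., n_bands) scene DNs to convert
    """
    n_bands = dn_panels.shape[1]
    gains, offsets = np.empty(n_bands), np.empty(n_bands)
    for b in range(n_bands):
        # least-squares line: reflectance = gain * DN + offset, fitted per band
        gains[b], offsets[b] = np.polyfit(dn_panels[:, b], reflectance_panels[:, b], 1)
    return dn_scene * gains + offsets

# Two panels (dark ~5 %, bright ~95 %) in a toy 4-band sensor
dn_panels = np.array([[310., 290., 305., 280.], [3900., 3850., 3700., 3600.]])
refl_panels = np.array([[0.05] * 4, [0.95] * 4])
dn_pixel = np.array([2100., 1800., 1500., 900.])
print(np.round(empirical_line(dn_panels, refl_panels, dn_pixel), 3))
```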

  19. On infrasound generated by wind farms and its propagation in low-altitude tropospheric waveguides

    Science.gov (United States)

    Marcillo, Omar; Arrowsmith, Stephen; Blom, Philip; Jones, Kyle

    2015-10-01

    Infrasound from a 60-turbine wind farm was found to propagate to distances up to 90 km under nighttime atmospheric conditions. Four infrasound sensor arrays were deployed in central New Mexico in February 2014; three of these arrays captured infrasound from a large wind farm. The arrays were in a linear configuration oriented southeast with 13, 54, 90, and 126 km radial distances and azimuths of 166°, 119°, 113°, and 111° from the 60 1.6 MW turbine Red Mesa Wind Farm, Laguna Pueblo, New Mexico, USA. Peaks at a fundamental frequency slightly below 0.9 Hz and its harmonics characterize the spectrum of the detected infrasound. The generation of this signal is linked to the interaction of the blades, flow gradients, and the supporting tower. The production of wind-farm sound, its propagation, and detection at long distances can be related to the characteristics of the atmospheric boundary layer. First, under stable conditions, mostly occurring at night, winds are highly stratified, which enhances the production of thickness sound and the modulation of other higher-frequency wind turbine sounds. Second, nocturnal atmospheric conditions can create low-altitude waveguides (with altitudes on the order of hundreds of meters) allowing long-distance propagation. Third, night and early morning hours are characterized by reduced background atmospheric noise that enhances signal detectability. This work describes the characteristics of the infrasound from a quasi-continuous source with the potential for long-range propagation that could be used to monitor the lower part of the atmospheric boundary layer.

  20. Propagation of whistler-mode chorus to low altitudes: divergent ray trajectories and ground accessibility

    Directory of Open Access Journals (Sweden)

    J. Chum

    2005-12-01

    Full Text Available We investigate the ray trajectories of nonductedly propagating lower-band chorus waves with respect to their initial angle θ0 between the wave vector and the ambient magnetic field. Although we consider a wide range of initial angles θ0, in order to be consistent with recent satellite observations, we pay special attention to the intervals of initial angles θ0 for which the waves propagate along the field lines in the source region, i.e. we mainly focus on waves generated with θ0 within an interval close to 0° and on waves generated within an interval close to the Gendrin angle. We demonstrate that the ray trajectories of waves generated within an interval close to the Gendrin angle with a wave vector directed towards the lower L-shells (towards the Earth) diverge significantly at the frequencies typical for the lower-band chorus. Some of these diverging trajectories reach the topside ionosphere having θ close to 0°; thus, a part of the energy may leak to the ground at higher latitudes where the field lines have a nearly vertical direction. The waves generated with different initial angles are reflected. A small variation of the initial wave normal angle thus very dramatically changes the behaviour of the resulting ray. Although our approach is rather theoretical, based on the ray tracing simulation, we show that the initial angle θ0 of the waves reaching the ionosphere (and possibly the ground) is surprisingly close, differing by just several degrees, to the initial angles which fit the observations of magnetospherically reflected chorus revealed by the CLUSTER satellites. We also mention observations of diverging trajectories on low-altitude satellites.

  1. Determining Spectral Reflectance Coefficients from Hyperspectral Images Obtained from Low Altitudes

    Science.gov (United States)

    Walczykowski, P.; Jenerowicz, A.; Orych, A.; Siok, K.

    2016-06-01

    Remote sensing plays a very important role in many different study fields, like hydrology, crop management, environmental and ecosystem studies. For all the mentioned areas of interest, different remote sensing and image processing techniques, such as image classification (object- and pixel-based), object identification, change detection, etc., can be applied. Most of these techniques use spectral reflectance coefficients as the basis for the identification and distinction of different objects and materials, e.g. monitoring of vegetation stress, identification of water pollutants, yield identification, etc. Spectral characteristics are usually acquired using discrete methods such as spectrometric measurements in both laboratory and field conditions. Such measurements however can be very time consuming, which has led many international researchers to investigate the reliability and accuracy of using image-based methods. According to published and ongoing studies, in order to acquire these spectral characteristics from images, it is necessary to have hyperspectral data. The presented article describes a series of experiments conducted using the push-broom Headwall MicroHyperspec A-series VNIR. This hyperspectral scanner allows for registration of images with more than 300 spectral channels with a 1.9 nm spectral bandwidth in the 380-1000 nm range. The aim of these experiments was to establish a methodology for acquiring spectral reflectance characteristics of different forms of land cover using such a sensor. All research work was conducted in controlled conditions from low altitudes. Hyperspectral images obtained with this specific type of sensor require a unique approach in terms of post-processing, especially radiometric correction. Large amounts of acquired imagery data allowed the authors to establish a new post-processing approach. The developed methodology allowed the authors to obtain spectral reflectance coefficients from a hyperspectral sensor mounted on an

  2. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind sources separation methods for modeling and understanding of multimedia data, which...
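
    As a minimal illustration of independent component analysis and blind source separation, the sketch below separates a toy two-source mixture with scikit-learn's FastICA. The synthetic signals stand in for real multimedia data.

```python
# Blind source separation sketch with FastICA on a synthetic two-source mixture.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # smooth source
s2 = np.sign(np.sin(3 * t))              # square-wave source
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5], [0.4, 1.0]])   # mixing matrix
X = S @ A.T                              # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)             # recovered sources (up to scale and order)
print("estimated mixing matrix:\n", np.round(ica.mixing_, 2))
```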

  3. DOA Estimation of Low Altitude Target Based on Adaptive Step Glowworm Swarm Optimization-multiple Signal Classification Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Hao

    2015-06-01

    Full Text Available The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for the Direction Of Arrival (DOA) estimation of targets in a low-altitude multipath environment. As such, a novel MUSIC approach is proposed on the basis of the Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. Virtual spatial smoothing of the matrix formed by each snapshot is used to decorrelate the multipath signal and establish a full-order correlation matrix. ASGSO optimizes the objective function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely without loss of effective radar aperture.
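
    For background, the sketch below implements plain narrowband MUSIC for a uniform linear array, without the ASGSO search or the virtual spatial smoothing described above. Array geometry, source angles and noise levels are illustrative.

```python
# Plain narrowband MUSIC sketch for a uniform linear array (ULA).
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """X: (n_sensors, n_snapshots) array data; d: element spacing in wavelengths."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    eigval, eigvec = np.linalg.eigh(R)               # ascending eigenvalues
    En = eigvec[:, : n_sensors - n_sources]          # noise subspace
    p = []
    for ang in angles:
        a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(np.radians(ang)))
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)

# Simulate two uncorrelated sources at -20 and 35 degrees on an 8-element ULA
rng = np.random.default_rng(1)
n_sensors, n_snap = 8, 500
true_angles = [-20, 35]
A = np.array([np.exp(-2j * np.pi * 0.5 * np.arange(n_sensors) * np.sin(np.radians(a)))
              for a in true_angles]).T
S = (rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((n_sensors, n_snap))
                   + 1j * rng.standard_normal((n_sensors, n_snap)))

angles, p = music_spectrum(X, n_sources=2)
peaks, _ = find_peaks(p)
top = np.sort(angles[peaks[np.argsort(p[peaks])[-2:]]])
print("estimated DOAs [deg]:", top)                  # close to -20 and 35
```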

  4. Mercury's Hollows: New Information on Distribution and Morphology from MESSENGER Observations at Low Altitude

    Science.gov (United States)

    Blewett, D. T.; Stadermann, A. C.; Chabot, N. L.; Denevi, B. W.; Ernst, C. M.; Peplowski, P. N.

    2014-12-01

    MESSENGER's orbital mission at Mercury led to the discovery of an unusual landform not known from other airless rocky bodies of the Solar System. Hollows are irregularly shaped, shallow, rimless depressions, often occurring in clusters and with high-reflectance interiors and halos. The fresh appearance of hollows suggests that they are relatively young features. For example, hollows are uncratered, and talus aprons downslope of hollows in certain cases appear to be covering small impact craters (100-200 in diameter). Hence, some hollows may be actively forming at present. The characteristics of hollows are suggestive of formation via destruction of a volatile-bearing phase (possibly one or more sulfides) through solar heating, micrometeoroid bombardment, and/or ion impact. Previous analysis showed that hollows are associated with low-reflectance material (LRM), a color unit identified from global color images. The material hosting hollows has often been excavated from depth by basin or crater impacts. Hollows are small features (tens of meters to several kilometers), so their detection and characterization with MESSENGER's global maps have been limited. MESSENGER's low-altitude orbits provide opportunities for collection of images at high spatial resolutions, which reveal new occurrences of hollows and offer views of hollows with unprecedented detail. As of this writing, we have examined more than 21,000 images with sufficiently small pixel sizes. Shadow-length measurements were made on 280 images, yielding the depths of 1343 individual hollows. The mean depth is 30 m, with a standard deviation of 17 m. We also explored correlations between the geographic locations of hollows and maps provided by the MESSENGER geochemical sensors (X-Ray, Gamma-Ray, and Neutron Spectrometers), including the abundances of Al/Si, Ca/Si, Fe/Si, K, Mg/Si, and S/Si, as well as total neutron cross-section. No clear compositional trends emerged; it is likely that any true compositional preference for terrain

  5. Unmanned Aircraft System (UAS) Traffic Management (UTM): Enabling Civilian Low-Altitude Airspace and Unmanned Aerial System Operations

    Science.gov (United States)

    Kopardekar, Parimal Hemchandra

    2016-01-01

    Just a year ago we laid out the UTM challenges and NASA's proposed solutions. During the past year NASA's goal continues to be to conduct research, development and testing to identify airspace operations requirements to enable large-scale visual and beyond visual line-of-sight UAS operations in the low-altitude airspace. Significant progress has been made, and NASA is continuing to move forward.

  6. Multi-sensor field trials for detection and tracking of multiple small unmanned aerial vehicles flying at low altitude

    Science.gov (United States)

    Laurenzis, Martin; Hengy, Sebastien; Hommes, Alexander; Kloeppel, Frank; Shoykhetbrod, Alex; Geibig, Thomas; Johannes, Winfried; Naz, Pierre; Christnacher, Frank

    2017-05-01

    Small unmanned aerial vehicles (UAV) flying at low altitude are becoming more and more of a serious threat in civilian and military scenarios. In the recent past, numerous incidents have been reported in which small UAV were flying in security areas, leading to serious danger to public safety or privacy. The detection and tracking of small UAV is a widely discussed topic. In particular, small UAV flying at low altitude in urban environments or near background structures, and the detection of multiple UAV at the same time, are challenging. Field trials were carried out to investigate the detection and tracking of multiple UAV flying at low altitude with state-of-the-art detection technologies. Here, we present results which were achieved using a heterogeneous sensor network consisting of acoustic antennas, small frequency modulated continuous wave (FMCW) RADAR systems and optical sensors. While acoustics, RADAR and LiDAR were applied to monitor a wide azimuthal area (360°) and to simultaneously track multiple UAV, optical sensors were used for sequential identification with a very narrow field of view.

  7. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing.

    Science.gov (United States)

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-09-30

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently, consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species ratios of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently, data of both high spatial and spectral resolution are not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA
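
    The OBRA method cited above searches all band pairs for the log-ratio most strongly related to depth, regressing depth against X = ln(R(λ1)/R(λ2)) for every combination. A minimal sketch of that exhaustive search is given below, assuming a hypothetical array of field spectra with matching depth measurements; the data and function name are illustrative, not taken from the study.

    ```python
    import numpy as np

    def obra(reflectance: np.ndarray, depth: np.ndarray):
        """Optimal Band Ratio Analysis: find the band pair whose log-ratio best predicts depth.

        reflectance: (n_samples, n_bands) spectra; depth: (n_samples,) submergence depths.
        Returns (band_i, band_j, r_squared) for the best linear fit depth ~ ln(R_i / R_j).
        """
        n_bands = reflectance.shape[1]
        best = (0, 0, -np.inf)
        for i in range(n_bands):
            for j in range(n_bands):
                if i == j:
                    continue
                x = np.log(reflectance[:, i] / reflectance[:, j])
                r = np.corrcoef(x, depth)[0, 1]
                if r ** 2 > best[2]:
                    best = (i, j, r ** 2)
        return best

    # Hypothetical usage with random placeholder data
    rng = np.random.default_rng(0)
    spectra = rng.uniform(0.01, 0.5, size=(50, 20))
    depths = rng.uniform(0.1, 1.0, size=50)
    print(obra(spectra, depths))
    ```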

  8. Modeling accelerator structures and RF components

    International Nuclear Information System (INIS)

    Ko, K.; Ng, C.K.; Herrmannsfeldt, W.B.

    1993-03-01

    Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support all contributed to this development. We will describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e- storage ring will be included

  9. A principal components model of soundscape perception.

    Science.gov (United States)

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.

  10. Breeding for Increased Water Use Efficiency in Corn (Maize) Using a Low-altitude Unmanned Aircraft System

    Science.gov (United States)

    Shi, Y.; Veeranampalayam-Sivakumar, A. N.; Li, J.; Ge, Y.; Schnable, J. C.; Rodriguez, O.; Liang, Z.; Miao, C.

    2017-12-01

    Low-altitude aerial imagery collected by unmanned aircraft systems (UAS) at centimeter-level spatial resolution provides great potential to collect high throughput plant phenotyping (HTP) data and accelerate plant breeding. This study is focused on UAS-based HTP for breeding increased water use efficiency in corn in eastern Nebraska. The field trial is part of the Genomes to Fields consortium effort to grow and phenotype many of the same corn (maize) hybrids at approximately 40 locations across the United States and Canada in order to stimulate new research in crop modeling, the development of new plant phenotyping technologies and the identification of genetic loci that control the adaptation of specific corn (maize) lines to specific environments. It included approximately 250 maize hybrids primarily generated using recently off-patent material from major seed companies. These lines are the closest material to what farmers are growing today which can be legally used for research purposes and genotyped by the public sector. During the growing season, a hexacopter equipped with multispectral and RGB cameras was flown to image this 1-hectare field trial near Mead, NE. Sensor data from the UAS were correlated directly with grain yield, measured at the end of the growing season, and were also used to quantify other traits of interest to breeders including flowering date, plant height, leaf orientation, canopy spectra, and stand count. The existing challenges of field data acquisition (to ensure data quality) and development of effective image processing algorithms (such as detecting corn tassels) will be discussed. The success of this study and others like it will speed up the process of phenotypic data collection, and provide more accurate and detailed trait data for plant biologists, plant breeders, and other agricultural scientists. Employing advanced UAS-based machine vision technologies in agricultural applications has the potential
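
    One common way such UAS imagery is related to yield is through a spectral index such as NDVI, computed from the near-infrared and red bands and then correlated plot-by-plot with harvest data. The sketch below illustrates that idea under those assumptions; the arrays are placeholders, and the study's actual trait-extraction pipeline is more extensive.

    ```python
    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index from near-infrared and red reflectance."""
        return (nir - red) / (nir + red + 1e-9)

    # Hypothetical per-plot means of NIR and red reflectance, and measured grain yield (t/ha)
    nir = np.array([0.42, 0.48, 0.39, 0.51, 0.45])
    red = np.array([0.08, 0.06, 0.10, 0.05, 0.07])
    yield_t_ha = np.array([9.1, 10.4, 8.2, 11.0, 9.6])

    plot_ndvi = ndvi(nir, red)
    r = np.corrcoef(plot_ndvi, yield_t_ha)[0, 1]
    print(f"NDVI-yield correlation: {r:.2f}")
    ```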

  11. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra for the four classifications, used to classify objects, are made available with the code.
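
    The code itself is written in IDL; a rough Python analogue of the same idea, projecting spectra onto a few principal components and classifying the scores with a Gaussian mixture, might look like the following sketch. The placeholder data and the choice of five components and four classes simply mirror the description above.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    # Placeholder "spectra": 200 objects x 100 wavelength bins
    rng = np.random.default_rng(1)
    spectra = rng.normal(size=(200, 100))

    # Project onto the first five principal components, as in the described scheme
    pca = PCA(n_components=5)
    scores = pca.fit_transform(spectra)

    # Model the 5D PC scores with a four-class Gaussian mixture and assign classes
    gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
    classes = gmm.predict(scores)
    print(np.bincount(classes))
    ```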

  12. Overview of the model component in ECOCLIM

    DEFF Research Database (Denmark)

    Geels, Camilla; Boegh, Eva; Bendtsen, J

    As part of the Danish strategic research project ECOCLIM: Ecosystems Surface Exchange of Greenhouse Gases in an Environment of Changing Anthropogenic and Climate Forcing, a model system will be developed. This model system will be based on both terrestrial and marine ecosystems in order to be able to describe the exchange of GHG, with main focus on carbon dioxide (CO2), above the Danish terrestrial biosphere as well as above Danish waters including fjords. The construction of the model system is based on data from new, existing and previous field experiments and on improved ecosystem and atmospheric models. We will use the model system to 1) quantify the potential effects of climate change on ecosystem exchange of GHG and 2) estimate the impacts of changes in management practices including land use change and nitrogen (N) loads. Here the various model components will be introduced.

  13. Increased Hypoxic Dose After Training at Low Altitude with 9h Per Night at 3000m Normobaric Hypoxia.

    Science.gov (United States)

    Carr, Amelia J; Saunders, Philo U; Vallance, Brent S; Garvican-Lewis, Laura A; Gore, Christopher J

    2015-12-01

    This study examined effects of low altitude training and a live-high: train-low protocol (combining both natural and simulated modalities) on haemoglobin mass (Hbmass), maximum oxygen consumption (VO2max), time to exhaustion, and submaximal exercise measures. Eighteen elite-level race-walkers were assigned to one of two experimental groups; lowHH (low Hypobaric Hypoxia: continuous exposure to 1380 m for 21 consecutive days; n = 10) or a combined low altitude training and nightly Normobaric Hypoxia (lowHH+NHnight: living and training at 1380 m, plus 9 h.night(-1) at a simulated altitude of 3000 m using hypoxic tents; n = 8). A control group (CON; n = 10) lived and trained at 600 m. Measurement of Hbmass, time to exhaustion and VO2max was performed before and after the training intervention. Paired samples t-tests were used to assess absolute and percentage change pre and post-test differences within groups, and differences between groups were assessed using a one-way ANOVA with least significant difference post-hoc testing. Statistical significance was tested at p < 0.05. We recommend low altitude (1380 m) combined with sleeping in altitude tents (3000 m) as one effective alternative to traditional altitude training methods, which can improve Hbmass. Key points: In some countries, it may not be possible to perform classical altitude training effectively, due to the low elevation at altitude training venues. An additional hypoxic stimulus can be provided by simulating higher altitudes overnight, using altitude tents. Three weeks of combined (living and training at 1380 m) and simulated altitude exposure (at 3000 m) can improve haemoglobin mass by over 3% in comparison to control values, and can also improve time to exhaustion by ~9% in comparison to baseline. We recommend that, in the context of an altitude training camp at low altitudes (~1400 m), the addition of a relatively short exposure to simulated altitudes of 3000 m can elicit physiological and performance benefits, without compromise to

  14. Pool scrubbing models for iodine components

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, K. [Battelle Ingenieurtechnik GmbH, Eschborn (Germany)]

    1996-12-01

    Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes has been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs.

  15. Pool scrubbing models for iodine components

    International Nuclear Information System (INIS)

    Fischer, K.

    1996-01-01

    Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes has been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs

  16. Computational needs for modelling accelerator components

    International Nuclear Information System (INIS)

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs

  17. Increased Hypoxic Dose After Training at Low Altitude with 9h Per Night at 3000m Normobaric Hypoxia

    Directory of Open Access Journals (Sweden)

    Amelia J. Carr, Philo U. Saunders, Brent S. Vallance, Laura A. Garvican-Lewis, Christopher J. Gore

    2015-12-01

    Full Text Available This study examined effects of low altitude training and a live-high: train-low protocol (combining both natural and simulated modalities) on haemoglobin mass (Hbmass), maximum oxygen consumption (VO2max), time to exhaustion, and submaximal exercise measures. Eighteen elite-level race-walkers were assigned to one of two experimental groups; lowHH (low Hypobaric Hypoxia: continuous exposure to 1380 m for 21 consecutive days; n = 10) or a combined low altitude training and nightly Normobaric Hypoxia (lowHH+NHnight: living and training at 1380 m, plus 9 h.night-1 at a simulated altitude of 3000 m using hypoxic tents; n = 8). A control group (CON; n = 10) lived and trained at 600 m. Measurement of Hbmass, time to exhaustion and VO2max was performed before and after the training intervention. Paired samples t-tests were used to assess absolute and percentage change pre and post-test differences within groups, and differences between groups were assessed using a one-way ANOVA with least significant difference post-hoc testing. Statistical significance was tested at p < 0.05. There was a 3.7% increase in Hbmass in lowHH+NHnight compared with CON (p = 0.02). In comparison to baseline, Hbmass increased by 1.2% (±1.4%) in the lowHH group, 2.6% (±1.8%) in lowHH+NHnight, and there was a decrease of 0.9% (±4.9%) in CON. VO2max increased by ~4% within both experimental conditions but was not significantly greater than the 1% increase in CON. There was a ~9% difference in pre and post-intervention values in time to exhaustion after lowHH+NH-night (p = 0.03) and a ~8% pre to post-intervention difference (p = 0.006) after lowHH only. We recommend low altitude (1380 m) combined with sleeping in altitude tents (3000 m) as one effective alternative to traditional altitude training methods, which can improve Hbmass.

  18. Creation and usage of component model in projecting information systems

    OpenAIRE

    Urbonas, Paulius

    2004-01-01

    The purpose of this project was to create an information system using a component model. When new information systems are built, the same models are often constructed again from scratch. By realizing a system with a component model, existing components can be reused when creating a new system. To demonstrate the advantages of the component model, an information system was created for the company "Vilseda". So that the created components can be reused in the future, they were designed according to their types (graphical user interface, data and function reques...

  19. Food composition of some low altitude Lissotriton montandoni (Amphibia, Caudata) populations from North-Western Romania

    Directory of Open Access Journals (Sweden)

    Covaciu-Marcov S.D.

    2010-01-01

    Full Text Available The diet of some populations of Lissotriton montandoni from north-western Romania is composed of prey belonging to 20 categories. The food components of the Carpathian newts are similar to those of other species of newts. Most of the prey are aquatic animals, but terrestrial prey also has a high percentage abundance. The consumed prey categories are common in the newts' habitats as well, but in natural ponds the prey item with the highest abundance in the diet is not the most frequent one in the habitat. Thus, although the Carpathian newts are basically opportunistic predators, they still display a certain trophic selectivity.

  20. Strong localized variations of the low-altitude energetic electron fluxes in the evening sector near the plasmapause

    Directory of Open Access Journals (Sweden)

    E. E. Titova

    1998-01-01

    Full Text Available A specific type of energetic electron precipitation accompanied by a sharp increase in trapped energetic electron flux is found in the data obtained from low-altitude NOAA satellites. These strongly localized variations of the trapped and precipitated energetic electron flux have been observed in the evening sector near the plasmapause during the recovery phase of magnetic storms. Statistical characteristics of these structures as well as the results of comparison with proton precipitation are described. We demonstrate the spatial coincidence of localized electron precipitation with cold plasma gradient and whistler wave intensification measured on board the DE-1 and Aureol-3 satellites. A simultaneous localized sharp increase in both trapped and precipitating electron flux could be a result of significant pitch-angle isotropization of drifting electrons due to their interaction via cyclotron instability with the region of sharp increase in background plasma density. Key words. Ionosphere (particle precipitation; wave-particle interaction); Magnetospheric Physics (plasmasphere)

  1. The usefulness of low-altitude aerial photography for the assessment of channel morphodynamics of a lowland river

    Directory of Open Access Journals (Sweden)

    Ostrowski Piotr

    2017-06-01

    Full Text Available The paper presents examples of using low-altitude aerial images of a modern river channel, acquired from an ultralight aircraft. The images have been taken for two sections of the Vistula river: in the Małopolska Gorge and near Dęblin and Gołąb. Alongside the research flights, terrestrial investigations, such as echo sounding of the riverbed and geological mapping, were carried out in the river channel zone. A comparison of the results of aerial and terrestrial research revealed the high clarity of the images, allowing for precise identification of the evidence that indicates the specific course of river channel processes. Aerial images taken from ultralight aircraft can significantly increase the accuracy of geological surveys of river channel zones in the Polish Lowlands due to their low logistic requirements.

  2. Monitoring and Estimation of Soil Losses from Ephemeral Gully Erosion in Mediterranean Region Using Low Altitude Unmanned Aerial Vehicles

    Science.gov (United States)

    Gündoğan, R.; Alma, V.; Dindaroğlu, T.; Günal, H.; Yakupoğlu, T.; Susam, T.; Saltalı, K.

    2017-11-01

    Measuring gullies from remote sensing images obtained from satellite or aerial platforms is often not possible because gullies in agricultural fields, referred to as ephemeral (temporary) gullies, are filled within a very short time by tillage operations. Therefore, fast and accurate estimation of the sediment loss caused by ephemeral gully erosion is of great importance. This study aimed to monitor and calculate soil losses caused by gully erosion in agricultural areas using low altitude unmanned aerial vehicles. According to the calculation with Pix4D, the gully volume was estimated to be 10.41 m3 and the total loss of soil was estimated to be 14.47 Mg. The RMSE value of the estimations was found to be 0.89. The results indicated that unmanned aerial vehicles could be used in predicting ephemeral gully erosion and the associated losses of soil.
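
    The conversion from the photogrammetric volume to a soil mass is a multiplication by an assumed dry bulk density; the figures quoted above imply a density of roughly 1.39 Mg m-3. A minimal sketch of DEM differencing for volume plus that conversion is shown below; the grid values and the bulk density are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def gully_volume(dem_before: np.ndarray, dem_after: np.ndarray, cell_area_m2: float) -> float:
        """Eroded volume from DEM differencing: sum of surface lowering times cell area."""
        lowering = np.clip(dem_before - dem_after, 0.0, None)  # keep only material lost
        return float(lowering.sum() * cell_area_m2)

    # Hypothetical 0.1 m x 0.1 m grid with a small incised channel
    before = np.zeros((100, 100))
    after = before.copy()
    after[40:60, :] -= 0.05  # 5 cm of incision over a 2 m wide strip

    volume_m3 = gully_volume(before, after, cell_area_m2=0.01)
    bulk_density_mg_m3 = 1.39  # assumed dry bulk density (Mg per cubic meter)
    print(volume_m3, volume_m3 * bulk_density_mg_m3)  # eroded volume and soil loss in Mg
    ```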

  3. An emergency medical communications system by low altitude platform at the early stages of a natural disaster in Indonesia.

    Science.gov (United States)

    Qiantori, Andri; Sutiono, Agung Budi; Hariyanto, Hadi; Suwa, Hirohiko; Ohta, Toshizumi

    2012-02-01

    A natural disaster is a consequence of a natural hazard, such as a tsunami, earthquake or volcanic eruption, affecting humans. In order to support emergency medical communication services in natural disaster areas where the telecommunications facility has been seriously damaged, an ad hoc communication network backbone should be built to support emergency medical services. Combinations of requirements need to be considered before deciding on the best option. In the present study we have proposed a Low Altitude Platform consisting of tethered balloons combined with Wireless Fidelity (WiFi) 802.11 technology. To confirm that the suggested network would satisfy the emergency medical service requirements, a communications experiment, including measurement of service performance, was carried out.

  4. Control of respiration in flight muscle from the high-altitude bar-headed goose and low-altitude birds.

    Science.gov (United States)

    Scott, Graham R; Richards, Jeffrey G; Milsom, William K

    2009-10-01

    Bar-headed geese fly at altitudes of up to 9,000 m on their biannual migration over the Himalayas. To determine whether the flight muscle of this species has evolved to facilitate exercise at high altitude, we compared the respiratory properties of permeabilized muscle fibers from bar-headed geese and several low-altitude waterfowl species. Respiratory capacities were assessed for maximal ADP stimulation (with single or multiple inputs to the electron transport system) and cytochrome oxidase excess capacity (with an exogenous electron donor) and were generally 20-40% higher in bar-headed geese when creatine was present. When respiration rates were extrapolated to the entire pectoral muscle mass, bar-headed geese had a higher mass-specific aerobic capacity. This may represent a surplus capacity that counteracts the depressive effects of hypoxia on mitochondrial respiration. However, there were no differences in activity for mitochondrial or glycolytic enzymes measured in homogenized muscle. The [ADP] leading to half-maximal stimulation (K(m)) was approximately twofold higher in bar-headed geese (10 vs. 4-6 microM), and, while creatine reduced K(m) by 30% in this species, it had no effect on K(m) in low-altitude birds. Mitochondrial creatine kinase may therefore contribute to the regulation of oxidative phosphorylation in flight muscle of bar-headed geese, which could promote efficient coupling of ATP supply and demand. However, this was not based on differences in creatine kinase activity in isolated mitochondria or homogenized muscle. The unique differences in bar-headed geese existed without prior exercise or hypoxia exposure and were not a result of phylogenetic history, and may, therefore, be important evolutionary specializations for high-altitude flight.

  5. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands to more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI, University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S
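
    As a rough illustration of the kind of component interface such a translation targets, the sketch below defines a discrete-time PI-controller block with separate output and state-update steps, loosely mirroring the output/update split of a Simulink S-function. This is a generic sketch, not the COMDES component model or the paper's actual translation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class DiscretePI:
        """A discrete-time PI controller block with an S-function-like output/update split."""
        kp: float
        ki: float
        dt: float
        integral: float = 0.0  # internal discrete state

        def output(self, error: float) -> float:
            # Output step: compute the block output from the input and current state
            return self.kp * error + self.ki * self.integral

        def update(self, error: float) -> None:
            # Update step: advance the discrete state
            self.integral += error * self.dt

    # Hypothetical closed-loop usage: drive a simple first-order plant toward a setpoint of 1.0
    plant, setpoint, ctrl = 0.0, 1.0, DiscretePI(kp=2.0, ki=1.0, dt=0.1)
    for _ in range(50):
        e = setpoint - plant
        u = ctrl.output(e)
        ctrl.update(e)
        plant += 0.1 * (u - plant)  # toy plant dynamics
    print(round(plant, 3))
    ```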

  6. On the variability of I(7620 Å)/I(5577 Å) in low altitude aurora

    Directory of Open Access Journals (Sweden)

    E. J. Llewellyn

    Full Text Available An auroral electron excitation model, combined with simple equilibrium neutral and ion chemistry models, is used to investigate the optical emission processes and height profiles of I(5577 Å) and I(7620 Å) in the 90 to 100 km altitude region. It is shown that the apparent discrepancies between ground-based and rocket-borne auroral observations of the I(7620 Å)/I(5577 Å) ratio are due to the extreme height variation of this intensity ratio in the 90 to 100 km region.

    Key words. Atmospheric composition and structure (airglow and aurora)

  7. On the variability of I(7620 Å)/I(5577 Å) in low altitude aurora

    Directory of Open Access Journals (Sweden)

    E. J. Llewellyn

    1999-07-01

    Full Text Available An auroral electron excitation model, combined with simple equilibrium neutral and ion chemistry models, is used to investigate the optical emission processes and height profiles of I(5577 Å) and I(7620 Å) in the 90 to 100 km altitude region. It is shown that the apparent discrepancies between ground-based and rocket-borne auroral observations of the I(7620 Å)/I(5577 Å) ratio are due to the extreme height variation of this intensity ratio in the 90 to 100 km region. Key words. Atmospheric composition and structure (airglow and aurora)

  8. High-Resolution 3D Bathymetric Mapping for Small Streams Using Low-Altitude Aerial Photography

    Science.gov (United States)

    Dietrich, J. T.; Duffin, J.

    2015-12-01

    Geomorphic monitoring of river restoration projects is a critical component of measuring their success. In smaller streams, with depths less than 2 meters, one of the more difficult variables to map at high-resolution is bathymetry. In larger rivers, bathymetry can be measured with instruments like multi-beam sonar, bathymetric airborne LiDAR, or acoustic Doppler current profilers (ADCP). However, these systems are often limited by their minimum operating depths, which makes them ineffective in shallow water. Remote sensing offers several potential solutions for collecting bathymetry, spectral depth mapping and photogrammetric measurement (e.g. Structure-from-Motion (SfM) multi-view photogrammetry). In this case study, we use SfM to produce both high-resolution above water topography and below water bathymetry for two reaches of a stream restoration project on the Middle Fork of the John Day River in eastern Oregon and one reach on the White River in Vermont. We collected low-altitude multispectral (RGB+NIR) aerial photography at all of the sites at altitudes of 30 to 50 meters. The SfM survey was georeferenced with RTK-GPS ground control points and the bathymetry was refraction-corrected using additional RTK-GPS sample points. The resulting raster data products have horizontal resolutions of ~4-8 centimeters for the topography and ~8-15 cm for the bathymetry. This methodology, like many fluvial remote sensing methods, will only work under ideal conditions (e.g. clear water), but it provides an additional tool for collecting high-resolution bathymetric datasets for geomorphic monitoring efforts.
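
    The refraction correction mentioned above follows from Snell's law: under near-nadir viewing, apparent photogrammetric depths understate true depth by roughly the refractive index of water, so a simple correction multiplies apparent depth by about 1.34. A minimal sketch under that small-angle assumption is given below; the correction applied in the study may be more elaborate.

    ```python
    WATER_REFRACTIVE_INDEX = 1.337  # fresh water, approximate

    def refraction_corrected_depth(apparent_depth_m: float, n_water: float = WATER_REFRACTIVE_INDEX) -> float:
        """Small-angle (near-nadir) refraction correction for through-water photogrammetry."""
        return apparent_depth_m * n_water

    # Hypothetical example: an SfM surface appearing 0.45 m below the water surface
    print(round(refraction_corrected_depth(0.45), 3))  # ~0.60 m true depth
    ```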

  9. Radioisotope Stirling Engine Powered Airship for Low Altitude Operation on Venus

    Science.gov (United States)

    Colozza, Anthony J.

    2012-01-01

    The feasibility of a Stirling engine powered airship for the near surface exploration of Venus was evaluated. The heat source for the Stirling engine was limited to 10 general purpose heat source (GPHS) blocks. The baseline airship utilized hydrogen as the lifting gas and the electronics and payload were enclosed in a cooled insulated pressure vessel to maintain the internal temperature at 320 K and 1 Bar pressure. The propulsion system consisted of an electric motor driving a propeller. An analysis was set up to size the airship that could operate near the Venus surface based on the available thermal power. The atmospheric conditions on Venus were modeled and used in the analysis. The analysis was an iterative process between sizing the airship to carry a specified payload and the power required to operate the electronics, payload and cooling system as well as provide power to the propulsion system to overcome the drag on the airship. A baseline configuration was determined that could meet the power requirements and operate near the Venus surface. From this baseline design additional trades were made to see how other factors affected the design such as the internal temperature of the payload chamber and the flight altitude. In addition other lifting methods were evaluated such as an evacuated chamber, heated atmospheric gas and augmented heated lifting gas. However none of these methods proved viable.
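
    A first-order check on the lifting volume follows from buoyancy in the dense CO2 atmosphere near the Venus surface: net lift per cubic meter is the difference between the atmospheric density and the lifting-gas density. The sketch below uses rounded, assumed values (about 65 kg/m3 for CO2 near the surface, hydrogen density scaled by molar mass); these are illustrative numbers, not figures from the report.

    ```python
    RHO_CO2_SURFACE = 65.0       # kg/m^3, assumed near-surface atmospheric density on Venus
    M_CO2, M_H2 = 44.01, 2.016   # molar masses, g/mol

    def required_envelope_volume(total_mass_kg: float) -> float:
        """Hydrogen volume needed for neutral buoyancy near the Venus surface (ideal-gas scaling)."""
        rho_h2 = RHO_CO2_SURFACE * (M_H2 / M_CO2)   # hydrogen density at the same T and p
        net_lift_per_m3 = RHO_CO2_SURFACE - rho_h2  # displaced mass minus gas mass, per m^3
        return total_mass_kg / net_lift_per_m3

    # Hypothetical 200 kg airship (structure + payload + cooling): roughly 3.2 m^3 of hydrogen
    print(round(required_envelope_volume(200.0), 1))
    ```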

  10. Stable Imaging and Accuracy Issues of Low-Altitude Unmanned Aerial Vehicle Photogrammetry Systems

    Directory of Open Access Journals (Sweden)

    Ying Yang

    2016-04-01

    Full Text Available Stable imaging of an unmanned aerial vehicle (UAV) photogrammetry system is an important issue that affects the data processing and application of the system. Compared with traditional aerial images, the large rotations of roll, pitch, and yaw angles of UAV images decrease image quality and result in image deformation, thereby affecting the ground resolution, overlaps, and the consistency of the stereo models. These factors also cause difficulties in automatic tie point matching, image orientation, and accuracy of aerial triangulation (AT). The issues of large-angle photography of a UAV photogrammetry system are discussed and analyzed quantitatively in this paper, and a simple and lightweight three-axis stabilization platform, which works with a low-precision integrated inertial navigation system and a three-axis mechanical platform, is used to reduce this problem. An experiment was carried out with an airship as the flight platform. Another experimental dataset, which was acquired by the same flight platform without a stabilization platform, was utilized for a comparative test. Experimental results show that the system can effectively isolate the swing of the flying platform. To ensure objective and reliable results, another group of experimental datasets, which were acquired using a fixed-wing UAV platform, was also analyzed. Statistical results of the experimental datasets confirm that stable imaging of a UAV platform can help improve the quality of aerial photography imagery and the accuracy of AT, and potentially improve the application of images acquired by a UAV.

  11. Accurate modelling of UV written waveguide components

    DEFF Research Database (Denmark)

    Svalgaard, Mikael

    BPM simulation results of UV written waveguide components that are indistinguishable from measurements can be achieved on the basis of trajectory scan data and an equivalent step index profile that is very easy to measure.

  12. Accurate modeling of UV written waveguide components

    DEFF Research Database (Denmark)

    Svalgaard, Mikael

    BPM simulation results of UV written waveguide components that are indistinguishable from measurements can be achieved on the basis of trajectory scan data and an equivalent step index profile that is very easy to measure.

  13. High resolution three-dimensional magnetization mapping in Tokachidake Volcano using low altitude airborne magnetic survey data

    Science.gov (United States)

    Iwata, M.; Mogi, T.; Okuma, S.; Nakatsuka, T.

    2016-12-01

    Tokachidake Volcano, central Hokkaido, Japan erupted in 1926, 1962 and 1988-1989 in the 20th century from the central part. In recent years, expansions of the edifice of the volcano at shallow depth and increases of the volcanic smoke in the Taisho crater were observed (Meteorological Agency of Japan, 2014). Magnetic changes were observed at the 62-2 crater by repeated magnetic measurements in 2008-2009, implying a demagnetization beneath the crater (Hashimoto et al., 2010). Moreover, a very low resistivity part was found right under the 62-2 crater from an AMT survey (Yamaya et al., 2010). However, since the number of stations in that survey was limited, the area coverage is not sufficient. In this study, we have re-analyzed high-resolution aeromagnetic data to delineate the three-dimensional magnetic structure of the volcano to understand the nature of other craters. A low altitude airborne magnetic survey was conducted in 2014 mainly over the active areas of the volcano by the Ministry of Land, Infrastructure, Transport and Tourism to manage landslide risk in the volcano. The survey was flown at an altitude of 60 m above ground by a helicopter with a cesium magnetometer in a towed bird 30 m below the helicopter. The low altitude survey enables us to delineate the detailed magnetic structure. We calculated the magnetic anomaly distribution on a smooth surface assuming equivalent anomalies below the observation surface. Then the 3D magnetic imaging method (Nakatsuka and Okuma, 2014) was applied to the magnetic anomalies to reveal the three-dimensional magnetic structure. As a result, magnetization highs were seen beneath the Ground crater, Suribachi crater and Kitamuki crater. This implies that magmatic activity occurred in the past at these craters. These magmas should have already solidified and acquired strong remanent magnetization. Relative magnetization lows were seen beneath the 62-2 crater and the Taisho crater where fumarolic activity is active. However a

  14. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae; Top, Søren

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behaviour as a means of computation, communication and constraints. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions.

  15. A low-altitude mountain range as an important refugium for two narrow endemics in the Southwest Australian Floristic Region biodiversity hotspot

    Science.gov (United States)

    Robinson, Todd P.; Wardell-Johnson, Grant W.; Yates, Colin J.; Van Niel, Kimberly P.; Byrne, Margaret; Schut, Antonius G. T.

    2017-01-01

    Background and Aims Low-altitude mountains constitute important centres of diversity in landscapes with little topographic variation, such as the Southwest Australian Floristic Region (SWAFR). They also provide unique climatic and edaphic conditions that may allow them to function as refugia. We investigate whether the Porongurups (altitude 655 m) in the SWAFR will provide a refugium for the endemic Ornduffia calthifolia and O. marchantii under forecast climate change. Methods We used species distribution modelling based on WorldClim climatic data, 30-m elevation data and a 2-m-resolution LiDAR-derived digital elevation model (DEM) to predict current and future distributions of the Ornduffia species at local and regional scales based on 605 field-based abundance estimates. Future distributions were forecast using RCP2.6 and RCP4.5 projections. To determine whether local edaphic and biotic factors impact these forecasts, we tested whether soil depth and vegetation height were significant predictors of abundance using generalized additive models (GAMs). Key Results Species distribution modelling revealed the importance of elevation and topographic variables at the local scale for determining distributions of both species, which also preferred shadier locations and higher slopes. However, O. calthifolia occurred at higher (cooler) elevations with rugged, concave topography, while O. marchantii occurred in disturbed sites at lower locations with less rugged, convex topography. Under future climates both species are likely to severely contract under the milder RCP2.6 projection (approx. 2 °C of global warming), but are unlikely to persist if warming is more severe (RCP4.5). GAMs showed that soil depth and vegetation height are important predictors of O. calthifolia and O. marchantii distributions, respectively. Conclusions The Porongurups constitute an important refugium for O. calthifolia and O. marchantii, but limits to this capacity may be reached if global

  16. Modelling, design and realization of microfluidic components

    NARCIS (Netherlands)

    Oosterbroek, R.E.

    1999-01-01

    During the last decades, miniaturization of electrical components and systems has assumed large proportions. The reason for these developments is the application of etch and deposition techniques in the IC-production (integrated circuit), which allows a large amount of functionality per surface

  17. Applications of Low Altitude Remote Sensing in Agriculture upon Farmers' Requests– A Case Study in Northeastern Ontario, Canada

    Science.gov (United States)

    Zhang, Chunhua; Walters, Dan; Kovacs, John M.

    2014-01-01

    With the growth of the low altitude remote sensing (LARS) industry in recent years, their practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain, including the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence from local weather conditions (e.g. clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of LARS service provided by private consultants or in collaboration with LARS scientific research teams. PMID:25386696

  18. Applications of low altitude remote sensing in agriculture upon farmers' requests--a case study in northeastern Ontario, Canada.

    Science.gov (United States)

    Zhang, Chunhua; Walters, Dan; Kovacs, John M

    2014-01-01

    With the growth of the low altitude remote sensing (LARS) industry in recent years, their practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain, including the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence from local weather conditions (e.g. clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of LARS service provided by private consultants or in collaboration with LARS scientific research teams.

  19. Applications of low altitude remote sensing in agriculture upon farmers' requests--a case study in northeastern Ontario, Canada.

    Directory of Open Access Journals (Sweden)

    Chunhua Zhang

    Full Text Available With the growth of the low altitude remote sensing (LARS) industry in recent years, their practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain, including the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence from local weather conditions (e.g. clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of LARS service provided by private consultants or in collaboration with LARS scientific research teams.

  20. An integrated Rotorcraft Avionics/Controls Architecture to support advanced controls and low-altitude guidance flight research

    Science.gov (United States)

    Jacobsen, Robert A.; Doane, Douglas H.; Eshow, Michelle M.; Aiken, Edwin W.; Hindson, William S.

    1992-01-01

    Salient design features of a new NASA/Army research rotorcraft--the Rotorcraft-Aircrew Systems Concepts Airborne Laboratory (RASCAL) are described. Using a UH-60A Black Hawk helicopter as a baseline vehicle, the RASCAL will be a flying laboratory capable of supporting the research requirements of major NASA and Army guidance, control, and display research programs. The paper describes the research facility requirements of these programs together with other critical constraints on the design of the research system. Research program schedules demand a phased development approach, wherein specific research capability milestones are met and flight research projects are flown throughout the complete development cycle of the RASCAL. This development approach is summarized, and selected features of the research system are described. The research system includes a real-time obstacle detection and avoidance system which will generate low-altitude guidance commands to the pilot on a wide field-of-view, color helmet-mounted display and a full-authority, programmable, fault-tolerant/fail-safe, fly-by-wire flight control system.

  1. A low-altitude mountain range as an important refugium for two narrow endemics in the Southwest Australian Floristic Region biodiversity hotspot

    NARCIS (Netherlands)

    Keppel, Gunnar; Robinson, Todd P.; Wardell-Johnson, Grant W.; Yates, Colin J.; Van Niel, Kimberly P.; Byrne, Margaret; Schut, Tom

    2016-01-01

    Background and Aims Low-altitude mountains constitute important centres of diversity in landscapes with little topographic variation, such as the Southwest Australian Floristic Region (SWAFR). They also provide unique climatic and edaphic conditions that may allow them to function as refugia. We

  2. Adoption of an unmanned helicopter for low-altitude remote sensing to estimate yield and total biomass of a rice crop

    Science.gov (United States)

    A radio-controlled unmanned helicopter-based LARS (Low-Altitude Remote Sensing) platform was used to acquire quality images of high spatial and temporal resolution, in order to estimate yield and total biomass of a rice crop (Oryza sativa L.). Fifteen rice field plots with five N-treatments (0, 33,...

  3. Model and Adaptive Operations of an Adaptive Component

    Science.gov (United States)

    Wei, Le; Zhao, Qiuyun; Shu, Hongping

    In order to keep up with the dynamic and open Internet environment, and from a component perspective, an adaptive component model based on an event mechanism and policy binding is proposed. Components of the model can sense external changes and give an explicit description of the external environment. According to a preset policy, a component can also take adaptive operations such as adding, deleting, replacing and updating when necessary, and adjust the behavior and structure of the internetware to provide better services.
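
    A toy rendering of that idea, in which a component senses external events and applies whichever adaptive operation its policy binds to each event, might look like the following sketch. The class, the operations and the policy table are hypothetical illustrations, not the authors' model.

    ```python
    from typing import Callable, Dict, List

    class AdaptiveComponent:
        """A component that reacts to sensed events via policy-bound adaptive operations."""

        def __init__(self, policy: Dict[str, Callable]):
            self.policy = policy          # event name -> adaptive operation
            self.subcomponents: List[str] = ["cache", "logger"]

        def on_event(self, event: str, detail: str) -> None:
            # Sense an external change and apply the operation bound to it, if any
            operation = self.policy.get(event)
            if operation:
                operation(self, detail)

    def add_subcomponent(component: AdaptiveComponent, name: str) -> None:
        component.subcomponents.append(name)

    def remove_subcomponent(component: AdaptiveComponent, name: str) -> None:
        if name in component.subcomponents:
            component.subcomponents.remove(name)

    # Hypothetical policy binding: high load adds a cache replica, a failure removes a subcomponent
    component = AdaptiveComponent({"high_load": add_subcomponent, "failure": remove_subcomponent})
    component.on_event("high_load", "cache_replica")
    component.on_event("failure", "logger")
    print(component.subcomponents)  # ['cache', 'cache_replica']
    ```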

  4. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though...
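
    As a generic illustration of the cross-validated model selection the abstract argues for (not the mixed ICA/PCA model itself), one can compare held-out log-likelihoods of candidate models with differing numbers of components, as in the sketch below; the data and model family are placeholders.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.model_selection import KFold

    # Placeholder data: 500 observations of 10 mixed signals
    rng = np.random.default_rng(2)
    data = rng.normal(size=(500, 10))

    def cv_log_likelihood(n_components: int, n_splits: int = 5) -> float:
        """Average held-out log-likelihood for a Gaussian mixture with n_components."""
        scores = []
        for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(data):
            model = GaussianMixture(n_components=n_components, random_state=0)
            model.fit(data[train_idx])
            scores.append(model.score(data[test_idx]))  # mean per-sample log-likelihood
        return float(np.mean(scores))

    best = max(range(1, 6), key=cv_log_likelihood)
    print("selected number of components:", best)
    ```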

  5. Component-based event composition modeling for CPS

    Science.gov (United States)

    Yin, Zhonghai; Chu, Yanan

    2017-06-01

    In order to combine the event-driven model with component-based architecture design, this paper proposes a component-based event composition model to realize CPS's event processing. Firstly, the formal representations of component and attribute-oriented event are defined. Every component consists of subcomponents and the corresponding event sets. The attribute "type" is added to the attribute-oriented event definition so as to describe its responsiveness to the component. Secondly, the component-based event composition model is constructed. A concept lattice-based event algebra system is built to describe the relations between events, and the rules for drawing the Hasse diagram are discussed. Thirdly, as there are redundancies among composite events, two simplification methods are proposed. Finally, a communication-based train control system is simulated to verify the event composition model. Results show that the event composition model we have constructed can be applied to express composite events correctly and effectively.

  6. Ecological Risk Assessment Framework for Low-Altitude Overflights by Fixed-Wing and Rotary-Wing Military Aircraft

    Energy Technology Data Exchange (ETDEWEB)

    Efroymson, R.A.

    2001-01-12

    This is a companion report to the risk assessment framework proposed by Suter et al. (1998): "A Framework for Assessment of Risks of Military Training and Testing to Natural Resources," hereafter referred to as the "generic framework." The generic framework is an ecological risk assessment methodology for use in environmental assessments on Department of Defense (DoD) installations. In the generic framework, the ecological risk assessment framework of the US Environmental Protection Agency (EPA 1998) is modified for use in the context of (1) multiple and diverse stressors and activities at a military installation and (2) risks resulting from causal chains, e.g., effects on habitat that indirectly impact wildlife. Both modifications are important if the EPA framework is to be used on military installations. In order for the generic risk assessment framework to be useful to DoD environmental staff and contractors, the framework must be applied to specific training and testing activities. Three activity-specific ecological risk assessment frameworks have been written (1) to aid environmental staff in conducting risk assessments that involve these activities and (2) to guide staff in the development of analogous frameworks for other DoD activities. The three activities are: (1) low-altitude overflights by fixed-wing and rotary-wing aircraft (this volume), (2) firing at targets on land, and (3) ocean explosions. The activities were selected as priority training and testing activities by the advisory committee for this project.

  7. Nitrogen component in nonpoint source pollution models

    Science.gov (United States)

    Pollutants entering a water body can be very destructive to the health of that system. Best Management Practices (BMPs) and/or conservation practices are used to reduce these pollutants, but understanding the most effective practices is very difficult. Watershed models are an effective tool to aid...

  8. Modeling thermally active building components using space mapping

    DEFF Research Database (Denmark)

    Pedersen, Frank; Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    Simplified models of the components do not always provide useful solutions, since they are not always able to reproduce the correct thermal behavior. The space mapping technique transforms a simplified, but computationally inexpensive model, in order to align it with a detailed model or measurements. This paper describes the principle of the space mapping technique, and introduces a simple space mapping technique. The technique is applied to a lumped parameter model of a thermo-active component, which provides a model of the thermal performance of the component as a function of two design parameters...
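
    In spirit, space mapping extracts parameters of the coarse model so that its response matches the detailed model (or measurements) at sampled points. The sketch below shows only that parameter-extraction core with a deliberately toy pair of models; all functions and data are illustrative assumptions, not the models used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fine_model(x: np.ndarray) -> np.ndarray:
        """Stand-in for the detailed (expensive) thermal model or for measurements."""
        return 1.3 * x ** 2 + 0.4 * x + 0.2

    def coarse_model(x: np.ndarray, p: np.ndarray) -> np.ndarray:
        """Simplified, cheap model whose parameters p are to be aligned with the fine model."""
        return p[0] * x ** 2 + p[1] * x + p[2]

    # Parameter extraction: choose p so the coarse model reproduces the fine model's response
    x_samples = np.linspace(0.0, 1.0, 11)
    residual = lambda p: coarse_model(x_samples, p) - fine_model(x_samples)
    p_mapped = least_squares(residual, x0=np.ones(3)).x
    print(np.round(p_mapped, 3))  # close to [1.3, 0.4, 0.2]
    ```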

  9. Model-integrating software components engineering flexible software systems

    CERN Document Server

    Derakhshanmanesh, Mahdi

    2015-01-01

    In his study, Mahdi Derakhshanmanesh builds on the state of the art in modeling by proposing to integrate models into running software on the component-level without translating them to code. Such so-called model-integrating software exploits all advantages of models: models implicitly support a good separation of concerns, they are self-documenting and thus improve understandability and maintainability and in contrast to model-driven approaches there is no synchronization problem anymore between the models and the code generated from them. Using model-integrating components, software will be

  10. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    A software component-based system aims to organize system architecture and behaviour as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behaviour, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands to more functionality, at even lower prices, and with opposite constraints. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions. The general aim of this work is the improvement of multi-disciplinary development of embedded systems with the focus on the relation...

  11. Current Flow and Pair Creation at Low Altitude in Rotation-Powered Pulsars' Force-Free Magnetospheres: Space Charge Limited Flow

    Science.gov (United States)

    Timokhin, A. N.; Arons, J.

    2013-01-01

    We report the results of an investigation of particle acceleration and electron-positron plasma generation at low altitude in the polar magnetic flux tubes of rotation-powered pulsars, when the stellar surface is free to emit whatever charges and currents are demanded by the force-free magnetosphere. We apply a new 1D hybrid plasma simulation code to the dynamical problem, using Particle-in-Cell methods for the dynamics of the charged particles, including a determination of the collective electrostatic fluctuations in the plasma, combined with a Monte Carlo treatment of the high-energy gamma-rays that mediate the formation of the electron-positron pairs. We assume the electric current flowing through the pair creation zone is fixed by the much higher inductance magnetosphere, and adopt the results of force-free magnetosphere models to provide the currents which must be carried by the accelerator. The models are spatially one dimensional, and designed to explore the physics, although of practical relevance to young, high-voltage pulsars. We observe novel behaviour: (a) When the current density j is less than the Goldreich-Julian value (0 < j/j_GJ < 1), acceleration of the current-carrying beam is mild, with the full Goldreich-Julian charge density comprising the charge densities of the beam and a cloud of electrically trapped particles with the same sign of charge as the beam. The voltage drops are of the order of mc^2/e, and pair creation is absent. (b) When the current density exceeds the Goldreich-Julian value (j/j_GJ > 1), the system develops high voltage drops (TV or greater), causing emission of curvature gamma-rays and intense bursts of pair creation. The bursts exhibit limit cycle behaviour, with characteristic time-scales somewhat longer than the relativistic fly-by time over distances comparable to the polar cap diameter (microseconds). (c) In return current regions (j/j_GJ < 0), the flow carries the required currents and adjusts the charge density and average electric field toward force-free conditions. We also ...

  12. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model in which the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute force-type methods which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
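
    A minimal sketch of the adjoint-transfer idea is given below: two black-box components are chained, and the sensitivities of the final responses to the first component's inputs are obtained with one adjoint (vector-Jacobian) sweep per final response rather than one per intermediate output. The components, class names, and dimensions are hypothetical; the linear toy models merely make the result easy to verify.

```python
# Sketch of adjoint (sensitivity) transfer across two black-box components, y = C1(x), r = C2(y).
# The goal is dR/dx for a few responses without forming full Jacobians of either component.
# Hypothetical linear components; only "apply" and "apply_adjoint" are exposed, as in a black-box setting.
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.normal(size=(40, 10))     # component 1: maps 10 inputs  -> 40 intermediate outputs
A2 = rng.normal(size=(3, 40))      # component 2: maps 40 outputs -> 3 final responses

class LinearComponent:
    def __init__(self, A):
        self._A = A
    def apply(self, v):             # forward action
        return self._A @ v
    def apply_adjoint(self, w):     # adjoint action (vector-Jacobian product)
        return self._A.T @ w

c1, c2 = LinearComponent(A1), LinearComponent(A2)
x = rng.normal(size=10)

# One adjoint sweep per final response (3 sweeps), instead of one per intermediate output (40).
n_responses = 3
dR_dx = np.zeros((n_responses, x.size))
for k in range(n_responses):
    e_k = np.zeros(n_responses)
    e_k[k] = 1.0                                   # seed for response k
    lam = c2.apply_adjoint(e_k)                    # sensitivity of response k to intermediates
    dR_dx[k] = c1.apply_adjoint(lam)               # transferred back to the original inputs

# Check against the full Jacobian chain (only possible here because the toy models are linear).
assert np.allclose(dR_dx, A2 @ A1)
print(dR_dx.shape)   # (3, 10): one row of sensitivities per response
```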

  13. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    Science.gov (United States)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finishing in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are being used to vibrate the cutting tip while machining at predetermined amplitude and frequency. However, modelling and simulation of these transducers is a tedious and difficult task. This is due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is accomplished by aggregating the basic components' model. System parameters are identified using Finite Element technique which then has been used to simulate the system in Matlab/SIMULINK. Various operation conditions are tested and performed to demonstrate the system performance.

  14. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    International Nuclear Information System (INIS)

    Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V

    2013-01-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finishing in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are being used to vibrate the cutting tip while machining at predetermined amplitude and frequency. However, modelling and simulation of these transducers is a tedious and difficult task. This is due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is accomplished by aggregating the basic components' model. System parameters are identified using Finite Element technique which then has been used to simulate the system in Matlab/SIMULINK. Various operation conditions are tested and performed to demonstrate the system performance

  15. Components in models of learning: Different operationalisations and relations between components

    Directory of Open Access Journals (Sweden)

    Mirkov Snežana

    2013-01-01

    Full Text Available This paper provides the presentation of different operationalisations of components in different models of learning. Special emphasis is on the empirical verifications of relations between components. Starting from the research of congruence between learning motives and strategies, underlying the general model of school learning that comprises different approaches to learning, we have analyzed the empirical verifications of factor structure of instruments containing the scales of motives and learning strategies corresponding to these motives. Considering the problems in the conceptualization of the achievement approach to learning, we have discussed the ways of operationalising the goal orientations and exploring their role in using learning strategies, especially within the model of the regulation of constructive learning processes. This model has served as the basis for researching learning styles that are the combination of a large number of components. Complex relations between the components point to the need for further investigation of the constructs involved in various models. We have discussed the findings and implications of the studies of relations between the components involved in different models, especially between learning motives/goals and learning strategies. We have analyzed the role of regulation in the learning process, whose elaboration, as indicated by empirical findings, can contribute to a more precise operationalisation of certain learning components. [Projects of the Ministry of Science of the Republic of Serbia, no. 47008: Improving the Quality and Accessibility of Education in the Modernisation Processes in Serbia, and no. 179034: From Encouraging Initiative, Cooperation and Creativity in Education to New Roles and Identities in Society

  16. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility and hardware real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variation of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  17. Adding heat to the live-high train-low altitude model: a practical insight from professional football

    Science.gov (United States)

    Buchheit, M; Racinais, S; Bilsborough, J; Hocking, J; Mendez-Villanueva, A; Bourdon, P C; Voss, S; Livingston, S; Christian, R; Périard, J; Cordy, J; Coutts, A J

    2013-01-01

    Objectives To examine with a parallel group study design the performance and physiological responses to a 14-day off-season ‘live high-train low in the heat’ training camp in elite football players. Methods Seventeen professional Australian Rules Football players participated in outdoor football-specific skills (32±1°C, 11.5 h) and indoor strength (23±1°C, 9.3 h) sessions and slept (12 nights) and cycled indoors (4.3 h) in either normal air (NORM, n=8) or normobaric hypoxia (14±1 h/day, FiO2 15.2–14.3%, corresponding to a simulated altitude of 2500–3000 m, hypoxic (HYP), n=9). They completed the Yo-Yo Intermittent Recovery level 2 (Yo-YoIR2) in temperate conditions (23±1°C, normal air) precamp (Pre) and postcamp (Post). Plasma volume (PV) and haemoglobin mass (Hbmass) were measured at similar times and 4 weeks postcamp (4WPost). Sweat sodium concentration ((Na+)sweat) was measured Pre and Post during a heat-response test (44°C). Results Both groups showed very large improvements in Yo-YoIR2 at Post (+44%; 90% CL 38, 50), with no between-group differences in the changes (−1%; −9, 9). Postcamp, large changes in PV (+5.6%; −1.8, 5.6) and (Na+)sweat (−29%; −37, −19) were observed in both groups, while Hbmass only moderately increased in HYP (+2.6%; 0.5, 4.5). At 4WPost, there was a likely slightly greater increase in Hbmass (+4.6%; 0.0, 9.3) and PV (+6%; −5, 18, unclear) in HYP than in NORM. Conclusions The combination of heat and hypoxic exposure during sleep/training might offer a promising ‘conditioning cocktail’ in team sports. PMID:24282209

  18. Robustness of Component Models in Energy System Simulators

    DEFF Research Database (Denmark)

    Elmegaard, Brian

    2003-01-01

    ). Others have to do with the interaction between models of the nature of the substances in an energy system (e.g., fuels, air, flue gas), models of the components in a system (e.g., heat exchangers, turbines, pumps), and the solver for the system of equations. This paper proposes that the interaction...... evaluated where it is defined. Outside this region an algorithm is introduced, so the model iterates back to the feasible region. It is shown how this can be done for four different energy system component models: turbine constant, gasifier, heat exchanger effectiveness, and heat exchanger heat......During the development of the component-based energy system simulator DNA (Dynamic Network Analysis), several obstacles to easy use of the program have been observed. Some of these have to do with the nature of the program being based on a modelling language, not a graphical user interface (GUI

  19. Investigating the Surface and Subsurface in Karstic Regions – Terrestrial Laser Scanning versus Low-Altitude Airborne Imaging and the Combination with Geophysical Prospecting

    Directory of Open Access Journals (Sweden)

    Nora Tilly

    2017-08-01

    Full Text Available Combining measurements of the surface and subsurface is a promising approach to understand the origin and current changes of karstic forms since subterraneous processes are often the initial driving force. A karst depression in south-west Germany was investigated in a comprehensive campaign with remote sensing and geophysical prospecting. This contribution has two objectives: firstly, comparing terrestrial laser scanning (TLS) and low-altitude airborne imaging from an unmanned aerial vehicle (UAV) regarding their performance in capturing the surface. Secondly, establishing a suitable way of combining this 3D surface data with data from the subsurface, derived by geophysical prospecting. Both remote sensing approaches performed satisfactorily and the established digital elevation models (DEMs) differ only slightly. These minor discrepancies result essentially from the different viewing geometries and post-processing concepts, for example whether the vegetation was removed or not. Validation analyses against highly accurate DGPS-derived point data sets revealed slightly better results for the DEM_TLS with a mean absolute difference of 0.03 m to 0.05 m and a standard deviation of 0.03 m to 0.07 m (DEM_UAV: mean absolute difference: 0.11 m to 0.13 m; standard deviation: 0.09 m to 0.11 m). The 3D surface data and 2D image of the vertical cross section through the subsurface along a geophysical profile were combined in block diagrams. The data sets fit very well and give a first impression of the connection between surface and subsurface structures. Since capturing the subsurface with this method is limited to 2D and the data acquisition is quite time consuming, further investigations are necessary for reliable statements about subterraneous structures, how these may induce surface changes, and the origin of this karst depression. Moreover, geophysical prospecting can only produce a suspected image of the subsurface since the apparent resistivity is measured

  20. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    Science.gov (United States)

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, couple tightly, and bind with simulation platforms closely. As a result, they are difficult to be reused across different simulation platforms and applications. To address the problem, this paper first proposed a reusable component model framework. Based on this framework, then our reusable model development approach is elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that the model developed using our approach has good reusability and it is easy to be used in different simulation platforms and applications. PMID:24729751

  1. Integrating environmental component models. Development of a software framework

    NARCIS (Netherlands)

    Schmitz, O.

    2014-01-01

    Integrated models consist of interacting component models that represent various natural and social systems. They are important tools to improve our understanding of environmental systems, to evaluate cause–effect relationships of human–natural interactions, and to forecast the behaviour of

  2. Component and system simulation models for High Flux Isotope Reactor

    International Nuclear Information System (INIS)

    Sozer, A.

    1989-08-01

    Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs

  3. Modeling microstructural evolution of multiple texture components during recrystallization

    DEFF Research Database (Denmark)

    Vandermeer, R.A.; Juul Jensen, D.

    1994-01-01

    Models were formulated in an effort to characterize recrystallization in materials with multiple texture components. The models are based on a microstructural path methodology (MPM). Experimentally the microstructural evolution of commercial aluminum during recrystallization was characterized...... using stereological point and lineal measurements of microstructural properties in combination with EBSP analysis for orientation determinations. The potential of the models to describe the observed recrystallization behavior of heavily cold-rolled commercial aluminum was demonstrated. A successful MPM...

  4. Investigation of the Crust of the Pannonian Basin, Hungary Using Low-Altitude CHAMP Horizontal Gradient Magnetic Anomalies

    Science.gov (United States)

    Taylor, Patrick T.; Kis, Karoly I.; Puszta, Sandor; Wittmann, Geza; Kim, Hyung Rae; Toronyi, B.

    2011-01-01

    The Pannonian Basin is a deep intra-continental basin that formed as part of the Alpine orogeny. It is some 600 by 500 km in area and centered on Hungary. This area was chosen since it has one of the thinnest continental crusts in Europe and is a region of complex tectonic structures. In order to study the nature of the crustal basement we used the long-wavelength magnetic anomalies acquired by the CHAMP satellite. The SWARM constellation, scheduled to be launched next year, will have two lower-altitude satellites flying abreast, with a separation of ca. 150 to 200 km, to record the horizontal magnetic gradient. Since the CHAMP satellite has been in orbit for eight years and has obtained an extensive range of data, both vertically and horizontally, there is a large enough database to compute the horizontal magnetic gradients over the Pannonian Basin region using these many CHAMP orbits. We recomputed a satellite magnetic anomaly map, using the spherical-cap method of Haines (1985), the technique of Alsdorf et al. (1994), and the spherical harmonic coefficients of MF6 (Maus et al., 2008), employing the latest and lowest-altitude CHAMP data. We then computed the horizontal magnetic anomaly gradients (Kis and Puszta, 2006) in order to determine how these component data will improve our interpretation and to preview what the SWARM mission will reveal with reference to the horizontal gradient anomalies. The gradient amplitude of a 1000 km northeast-southwest profile through our horizontal component anomaly map varied from 0 to 0.025 nT/km, with twin positive anomalies (0.025 and 0.023 nT/km) separated by a sharp negative anomaly at 0 nT/km. Horizontal gradients indicate major magnetization boundaries in the crust (Dole and Jordan, 1978; Cordell and Grauch, 1985). Our gradient anomaly was modeled with a two-dimensional body, and the anomaly, some 200 km across, correlates with a 200 km area of crustal thinning in the southwestern Pannonian Basin.

  5. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.

  6. Validating Timed Models of Deployment Components with Parametric Concurrency

    Science.gov (United States)

    Broch Johnsen, Einar; Owe, Olaf; Schlatte, Rudolf; Tapia Tarifa, Silvia Lizeth

    Many software systems today are designed without assuming a fixed underlying architecture, and may be adapted for sequential, multicore, or distributed deployment. Examples of such systems are found in, e.g., software product lines, service-oriented computing, information systems, embedded systems, operating systems, and telephony. Models of such systems need to capture and range over relevant deployment scenarios, so it is interesting to lift aspects of low-level deployment concerns to the abstraction level of the modeling language. This paper proposes an abstract model of deployment components for concurrent objects, extending the Creol modeling language. The deployment components are parametric in the amount of concurrency they provide; i.e., they vary in processing resources. We give a formal semantics of deployment components and characterize equivalence between deployment components which differ in concurrent resources in terms of test suites. Our semantics is executable on Maude, which allows simulations and test suites to be applied to a deployment component with different concurrent resources.

  7. Models for describing the thermal characteristics of building components

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik

    2008-01-01

    Outdoor testing of buildings and building components under real weather conditions provides useful information about their dynamic performance. Such knowledge is needed to properly characterize the heat transfer dynamics and provides useful information for implementing energy saving strategies...... of these approaches may therefore be very useful for selecting a suitable approach for each particular case. This paper presents an overview of models that can be applied for modelling the thermal characteristics of buildings and building components using data from outdoor testing. The choice of approach depends...

  8. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database system, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3].[1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04, Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004.[2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  9. Overfitting Bayesian Mixture Models with an Unknown Number of Components.

    Directory of Open Access Journals (Sweden)

    Zoé van Havre

    Full Text Available This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally light-weight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.

  10. Overfitting Bayesian Mixture Models with an Unknown Number of Components.

    Science.gov (United States)

    van Havre, Zoé; White, Nicole; Rousseau, Judith; Mengersen, Kerrie

    2015-01-01

    This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally light-weight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
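
    The authors' Zmix implementation is an R package using MCMC with prior parallel tempering; as a rough, variational analogue of the overfitting idea only, the sketch below fits a deliberately overfitted Gaussian mixture with scikit-learn's BayesianGaussianMixture, where a small Dirichlet concentration pushes superfluous components towards zero weight. The data and settings are illustrative.

```python
# Rough analogue of the overfitting idea (not the authors' Zmix R package):
# deliberately overfit the number of components and use a sparse Dirichlet prior on the
# weights so that superfluous components are pushed towards zero weight.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# Data actually drawn from 3 univariate Gaussian components.
x = np.concatenate([rng.normal(-4, 0.7, 300),
                    rng.normal(0, 1.0, 400),
                    rng.normal(5, 0.8, 300)]).reshape(-1, 1)

# Fit with far too many components; weight_concentration_prior << 1 encourages empty components.
model = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_distribution",
    weight_concentration_prior=0.01,
    max_iter=500,
    random_state=0,
).fit(x)

weights = np.sort(model.weights_)[::-1]
n_effective = np.sum(model.weights_ > 0.01)
print("sorted weights:", np.round(weights, 3))
print("effective number of components:", n_effective)   # typically recovers ~3
```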

  11. Effects of 12-Week Endurance Training at Natural Low Altitude on the Blood Redox Homeostasis of Professional Adolescent Athletes: A Quasi-Experimental Field Trial

    Directory of Open Access Journals (Sweden)

    Tomas K. Tong

    2016-01-01

    Full Text Available This field study investigated the influences of exposure to natural low altitude on endurance training-induced alterations of redox homeostasis in professional adolescent runners undergoing 12-week off-season conditioning program at an altitude of 1700 m (Alt, by comparison with that of their counterparts completing the program at sea-level (SL. For age-, gender-, and Tanner-stage-matched comparison, 26 runners (n=13 in each group were selected and studied. Following the conditioning program, unaltered serum levels of thiobarbituric acid reactive substances (TBARS, total antioxidant capacity (T-AOC, and superoxide dismutase accompanied with an increase in oxidized glutathione (GSSG and decreases of xanthine oxidase, reduced glutathione (GSH, and GSH/GSSG ratio were observed in both Alt and SL groups. Serum glutathione peroxidase and catalase did not change in SL, whereas these enzymes, respectively, decreased and increased in Alt. Uric acid (UA decreased in SL and increased in Alt. Moreover, the decreases in GSH and GSH/GSSG ratio in Alt were relatively lower compared to those in SL. Further, significant interindividual correlations were found between changes in catalase and TBARS, as well as between UA and T-AOC. These findings suggest that long-term training at natural low altitude is unlikely to cause retained oxidative stress in professional adolescent runners.

  12. Effects of 12-Week Endurance Training at Natural Low Altitude on the Blood Redox Homeostasis of Professional Adolescent Athletes: A Quasi-Experimental Field Trial.

    Science.gov (United States)

    Tong, Tomas K; Kong, Zhaowei; Lin, Hua; He, Yeheng; Lippi, Giuseppe; Shi, Qingde; Zhang, Haifeng; Nie, Jinlei

    2016-01-01

    This field study investigated the influences of exposure to natural low altitude on endurance training-induced alterations of redox homeostasis in professional adolescent runners undergoing 12-week off-season conditioning program at an altitude of 1700 m (Alt), by comparison with that of their counterparts completing the program at sea-level (SL). For age-, gender-, and Tanner-stage-matched comparison, 26 runners (n = 13 in each group) were selected and studied. Following the conditioning program, unaltered serum levels of thiobarbituric acid reactive substances (TBARS), total antioxidant capacity (T-AOC), and superoxide dismutase accompanied with an increase in oxidized glutathione (GSSG) and decreases of xanthine oxidase, reduced glutathione (GSH), and GSH/GSSG ratio were observed in both Alt and SL groups. Serum glutathione peroxidase and catalase did not change in SL, whereas these enzymes, respectively, decreased and increased in Alt. Uric acid (UA) decreased in SL and increased in Alt. Moreover, the decreases in GSH and GSH/GSSG ratio in Alt were relatively lower compared to those in SL. Further, significant interindividual correlations were found between changes in catalase and TBARS, as well as between UA and T-AOC. These findings suggest that long-term training at natural low altitude is unlikely to cause retained oxidative stress in professional adolescent runners.

  13. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
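
    The first-stage layout problem described above is a quadratic assignment problem. The toy sketch below shows the flow-times-distance objective (standing in for material movement weighted by exposure) and a plain swap-based local search; the dissertation itself uses an evolutionary heuristic, and all data here are made up.

```python
# Toy quadratic-assignment sketch for the first-stage layout problem: assign operations to
# locations so that a flow-times-distance cost (standing in for material movement weighted
# by exposure) is minimized. A plain swap-based local search is used here instead of the
# evolutionary heuristic of the dissertation; all data are made up.
import itertools

flow = [[0, 5, 2, 0],        # flow[i][j]: moves between operations i and j
        [5, 0, 3, 1],
        [2, 3, 0, 4],
        [0, 1, 4, 0]]
dist = [[0, 1, 2, 3],        # dist[a][b]: distance (or exposure weight) between locations a and b
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]
n = len(flow)

def cost(p):
    """p[i] = location assigned to operation i."""
    return sum(flow[i][j] * dist[p[i]][p[j]] for i in range(n) for j in range(n))

best = list(range(n))        # start from the identity layout
best_cost = cost(best)
improved = True
while improved:              # first-improvement pairwise-swap search
    improved = False
    for i, j in itertools.combinations(range(n), 2):
        cand = best[:]
        cand[i], cand[j] = cand[j], cand[i]
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
            improved = True
print("best layout:", best, "cost:", best_cost)
```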

  14. Virtual Models Linked with Physical Components in Construction

    DEFF Research Database (Denmark)

    Sørensen, Kristian Birch

    components in the construction process and thereby improving the information handling. The present PhD project has examined the potential of establishing such a digital link between virtual models and physical components in construction. This is done by integrating knowledge of civil engineering, software......) project progress management, and 3) in operation and maintenance. Experiments and implementations in real life projects showed that mobile technology and passive RFID technology delineate an efficient and practically implementable ways to establish the digital links in construction and are ready for use...... virtual models that thoroughly mirror the performance of the final facility and its construction process. However, the potential of the virtual models in construction has not yet been fully utilised. One way to take more advantage of the virtual models is by digitally linking them with the physical...

  15. Efficient speaker verification using Gaussian mixture model component clustering.

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.

    2012-04-01

    In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial amount of processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters and calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of the five different methods presented, one, Gaussian mixture reduction as proposed by Runnalls, easily outperformed the other methods. This method of Gaussian reduction iteratively reduces the size of a GMM by successively merging pairs of component densities. Pairs are selected for merger using a Kullback-Leibler-based metric. Using Runnalls' method of reduction, we
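
    At the core of Runnalls-style mixture reduction is a moment-preserving merge of two weighted Gaussian components and a Kullback-Leibler-based cost for choosing which pair to merge. The sketch below shows only that pairwise step with made-up components; the hash-GMM and shortlist machinery of the paper is not reproduced.

```python
# Moment-preserving merge of two weighted Gaussian components and the KL-based
# dissimilarity (Runnalls-style upper bound) used to decide which pair to merge.
# Generic illustration only; the hash-GMM/shortlist machinery of the paper is not reproduced.
import numpy as np

def merge(w1, m1, S1, w2, m2, S2):
    """Moment-preserving merge of two weighted Gaussians (weight, mean, covariance)."""
    w = w1 + w2
    a1, a2 = w1 / w, w2 / w
    m = a1 * m1 + a2 * m2
    d = (m1 - m2).reshape(-1, 1)
    S = a1 * S1 + a2 * S2 + a1 * a2 * (d @ d.T)
    return w, m, S

def merge_cost(w1, m1, S1, w2, m2, S2):
    """Upper bound on the KL discrepancy introduced by merging the pair (Runnalls-style)."""
    w, _, S = merge(w1, m1, S1, w2, m2, S2)
    return 0.5 * (w * np.log(np.linalg.det(S))
                  - w1 * np.log(np.linalg.det(S1))
                  - w2 * np.log(np.linalg.det(S2)))

# Two nearby 2-D components: merging them should be cheap.
w1, m1, S1 = 0.4, np.array([0.0, 0.0]), np.eye(2)
w2, m2, S2 = 0.6, np.array([0.5, 0.0]), np.eye(2)
print("merge cost (near pair):", merge_cost(w1, m1, S1, w2, m2, S2))

# A distant component: merging with it should be penalized.
w3, m3, S3 = 0.6, np.array([6.0, 0.0]), np.eye(2)
print("merge cost (far pair) :", merge_cost(w1, m1, S1, w3, m3, S3))
```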

  16. Modeling of magnetic components for power electronic converters

    Science.gov (United States)

    Hranov, Tsveti; Hinov, Nikolay

    2017-12-01

    The paper presents the modelling of magnetic components used in power electronic devices. Non-linear inductor and transformer models are presented. The design stage takes into account that the converters operate with non-sinusoidal currents and voltages. The models are realized in the MATLAB environment and verified using computer simulations. The advantage of these models over existing ones is that the relations between the parameters are formalized, which makes the computational procedure significantly faster. This is important in cases where the quasi-steady-state regime in the devices is reached significantly more slowly and the investigations require long simulation times.

  17. Do Knowledge-Component Models Need to Incorporate Representational Competencies?

    Science.gov (United States)

    Rau, Martina Angela

    2017-01-01

    Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…

  18. Towards integrated model building with semantically annotated components

    NARCIS (Netherlands)

    Schmitz, O.; Karssenberg, D.J.; Kok, J.L. de

    2012-01-01

    Integrated models are valuable tools for research and decision support as they allow a comprehensive analysis of environmental systems. Component–based software frameworks aid in their development by a more straightforward construction and coupling of generic components. Formal descriptions of

  19. Hybrid time/frequency domain modeling of nonlinear components

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...

  20. A channel-based coordination model for component composition

    NARCIS (Netherlands)

    F. Arbab (Farhad)

    2002-01-01

    In this paper, we present ρεω, a paradigm for composition of software components based on the notion of mobile channels. ρεω is a channel-based exogenous coordination model wherein complex coordinators, called connectors, are compositionally built out of

  1. A New Software Quality Model for Evaluating COTS Components

    OpenAIRE

    Adnan Rawashdeh; Bassem Matalkah

    2006-01-01

    Studies show that COTS-based (commercial off-the-shelf) systems built in recent years exceed 40% of all developed software systems. Therefore, a model that ensures the quality characteristics of such systems becomes a necessity. Among the most critical processes in COTS-based systems are the evaluation and selection of the COTS components. There are several existing quality models used to evaluate software systems in general; however, none of them is dedicated to COTS-based s...

  2. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims...... analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...
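
    The "simple thresholding" baseline mentioned above (zeroing sufficiently small loadings of ordinary PCA and renormalizing) takes only a few lines; the sketch below shows that baseline on random data, not the SPCA algorithms reviewed in the paper.

```python
# "Sparse PCA by simple thresholding": compute ordinary PCA loadings and zero out the
# sufficiently small ones. This is the baseline the paper compares against, not SPCA itself.
import numpy as np

def thresholded_loadings(X, n_components=2, threshold=0.1):
    Xc = X - X.mean(axis=0)                       # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                       # ordinary PCA loadings (columns)
    V[np.abs(V) < threshold] = 0.0                # zero out small loadings
    norms = np.linalg.norm(V, axis=0)
    norms[norms == 0.0] = 1.0
    return V / norms                              # renormalize each component

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))
X[:, 0] += 3.0 * rng.normal(size=100)             # make the first variable dominate one component
L = thresholded_loadings(X, n_components=2, threshold=0.2)
print(np.round(L, 2))                             # many exact zeros -> easier interpretation
```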

  3. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    AlAmmouri, Ahmad

    2016-07-26

    Stochastic geometry (SG) has been widely accepted as a fundamental tool for modeling and analyzing cellular networks. However, the fading models used with SG analysis are mainly confined to the simplistic Rayleigh fading, which is extended to the Nakagami-m fading in some special cases. Yet neither the Rayleigh nor the Nakagami-m model accounts for dominant specular components (DSCs), which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks with a generalized two-ray (GTR) fading channel. The GTR fading explicitly accounts for two DSCs in addition to the diffuse components and offers high flexibility to capture diverse fading channels that appear in realistic outdoor/indoor wireless communication scenarios. It also encompasses the famous Rayleigh and Rician fading as special cases. To this end, the prominent effect of DSCs is highlighted in terms of average spectral efficiency. © 2016 IEEE.
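
    A single GTR fading link can be simulated directly from its definition: two dominant specular components with random phases plus a diffuse Gaussian term. The Monte Carlo sketch below estimates the average spectral efficiency of one such link; the amplitudes, powers, and SNR are assumed values, and the stochastic-geometry network analysis of the paper is not reproduced.

```python
# Monte Carlo sketch of a generalized two-ray (GTR) fading link: two dominant specular
# components (DSCs) plus a diffuse Gaussian part. Single link only; the stochastic-geometry
# network analysis of the paper is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
N = 200_000                      # channel realizations
V1, V2 = 1.0, 0.7                # amplitudes of the two DSCs (assumed)
sigma2 = 0.1                     # total power of the diffuse component (assumed)

phi1 = rng.uniform(0, 2 * np.pi, N)
phi2 = rng.uniform(0, 2 * np.pi, N)
diffuse = np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

h = V1 * np.exp(1j * phi1) + V2 * np.exp(1j * phi2) + diffuse
gain = np.abs(h) ** 2

snr_avg_dB = 10.0                # mean SNR of the link (assumed)
snr = 10 ** (snr_avg_dB / 10) * gain / gain.mean()
ase = np.mean(np.log2(1.0 + snr))         # average spectral efficiency, bit/s/Hz
print(f"average spectral efficiency ~ {ase:.2f} bit/s/Hz")
```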

  4. Cognitive components underpinning the development of model-based learning.

    Science.gov (United States)

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  5. Four-component united-atom model of bitumen

    DEFF Research Database (Denmark)

    Hansen, Jesper Schmidt; Lemarchand, Claire; Nielsen, Erik

    2013-01-01

    We propose a four-component united-atom molecular model of bitumen. The model includes realistic chemical constituents and introduces a coarse graining level that suppresses the highest frequency modes. Molecular dynamics simulations of the model are carried out using graphics-processor-unit based...... software in time spans on the order of microseconds, which enables the study of slow relaxation processes characterizing bitumen. This paper also presents results of the model dynamics as expressed through the mean-square displacement, the stress autocorrelation function, and rotational relaxation...... the stress autocorrelation function, the shear viscosity and shear modulus are evaluated, showing a viscous response at frequencies below 100 MHz. The model predictions of viscosity and diffusivities are compared to experimental data, giving reasonable agreement. The model shows that the asphaltene, resin

  6. Formal Model-Driven Engineering: Generating Data and Behavioural Components

    Directory of Open Access Journals (Sweden)

    Chen-Wei Wang

    2012-12-01

    Full Text Available Model-driven engineering is the automatic production of software artefacts from abstract models of structure and functionality. By targeting a specific class of system, it is possible to automate aspects of the development process, using model transformations and code generators that encode domain knowledge and implementation strategies. Using this approach, questions of correctness for a complex, software system may be answered through analysis of abstract models of lower complexity, under the assumption that the transformations and generators employed are themselves correct. This paper shows how formal techniques can be used to establish the correctness of model transformations used in the generation of software components from precise object models. The source language is based upon existing, formal techniques; the target language is the widely-used SQL notation for database programming. Correctness is established by giving comparable, relational semantics to both languages, and checking that the transformations are semantics-preserving.
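
    As a toy illustration of the generation step (only the flavour of it, without the formal, semantics-preserving guarantees discussed in the paper), the sketch below emits SQL DDL from a small, hypothetical object model.

```python
# Toy illustration of model-driven generation: emit SQL DDL from a small, abstract object model.
# This sketch is illustrative only; it has none of the formal correctness guarantees discussed
# in the paper, and the model, type map, and naming rules are made up.
object_model = {
    "Customer": {"id": "int", "name": "str", "email": "str"},
    "Order":    {"id": "int", "customer_id": "int", "total": "float"},
}

SQL_TYPES = {"int": "INTEGER", "str": "VARCHAR(255)", "float": "REAL"}

def to_ddl(model):
    statements = []
    for cls, attrs in model.items():
        cols = ",\n".join(
            f"    {name} {SQL_TYPES[t]}" + (" PRIMARY KEY" if name == "id" else "")
            for name, t in attrs.items()
        )
        statements.append(f"CREATE TABLE {cls.lower()} (\n{cols}\n);")
    return "\n\n".join(statements)

print(to_ddl(object_model))
```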

  7. A minimal model for two-component dark matter

    International Nuclear Information System (INIS)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  8. Mathematical and computer modeling of component surface shaping

    Science.gov (United States)

    Lyashkov, A.

    2016-04-01

    The process of shaping technical surfaces is an interaction of a tool (a shape element) and a component (a formable element or a workpiece) in their relative movements. It was established that the main objects of formation are: 1) a discriminant of a family of surfaces, formed by the movement of the shape element relative to the workpiece; 2) an enveloping model of the real component surface obtained after machining, including transition curves and undercut lines; 3) the model of cut-off layers obtained in the process of shaping. When modeling shaping objects, there are many insufficiently solved or unsolved issues that make up a single scientific problem - the problem of qualitative shaping of the surface of the tool and then the component surface produced by this tool. The improvement of known metal-cutting tools and the intensive development of systems for their computer-aided design require further improvement of the methods of shaping the mating surfaces. In this regard, an important role is played by the study of the processes of shaping of technical surfaces with the use of the positive aspects of analytical and numerical mathematical methods and techniques associated with the use of mathematical and computer modeling. The author of the paper has posed and solved the problem of developing the mathematical, geometric, and algorithmic support of computer-aided design of cutting tools based on computer simulation of the shaping process of surfaces.

  9. Evaluating fugacity models for trace components in landfill gas

    International Nuclear Information System (INIS)

    Shafi, Sophie; Sweetman, Andrew; Hough, Rupert L.; Smith, Richard; Rosevear, Alan; Pollard, Simon J.T.

    2006-01-01

    A fugacity approach was evaluated to reconcile loadings of vinyl chloride (chloroethene), benzene, 1,3-butadiene and trichloroethylene in waste with concentrations observed in landfill gas monitoring studies. An evaluative environment derived from fictitious but realistic properties such as volume, composition, and temperature, constructed with data from the Brogborough landfill (UK) test cells was used to test a fugacity approach to generating the source term for use in landfill gas risk assessment models (e.g. GasSim). SOILVE, a dynamic Level II model adapted here for landfills, showed greatest utility for benzene and 1,3-butadiene, modelled under anaerobic conditions over a 10 year simulation. Modelled concentrations of these components (95 300 μg m^-3; 43 μg m^-3) fell within measured ranges observed in gas from landfills (24 300-180 000 μg m^-3; 20-70 μg m^-3). This study highlights the need (i) for representative and time-referenced biotransformation data; (ii) to evaluate the partitioning characteristics of organic matter within waste systems and (iii) for a better understanding of the role that gas extraction rate (flux) plays in producing trace component concentrations in landfill gas. - Fugacity for trace component in landfill gas
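
    The partitioning step underlying any fugacity calculation can be illustrated with a generic Mackay-style Level I sketch: a fixed chemical mass distributed over compartments sharing one fugacity. The SOILVE model used in the study is a dynamic Level II model that additionally includes loss processes; the compartments and numbers below are purely hypothetical.

```python
# Generic Mackay-style Level I fugacity sketch: equilibrium partitioning of a fixed chemical
# mass over compartments. The paper's SOILVE model is a *dynamic Level II* model (with
# reaction and advection losses); only the basic partitioning step is shown here, and all
# numbers below are hypothetical, not properties of any landfill or chemical in the study.
compartments = {                 # volume V (m^3), fugacity capacity Z (mol m^-3 Pa^-1)
    "gas":   {"V": 1.0e5, "Z": 4.0e-4},
    "water": {"V": 2.0e4, "Z": 1.0e-2},
    "waste": {"V": 5.0e4, "Z": 5.0e-1},
}
M_total = 100.0                  # total amount of the trace component (mol)

# Level I: a single fugacity f (Pa) shared by all compartments at equilibrium.
f = M_total / sum(c["V"] * c["Z"] for c in compartments.values())

for name, c in compartments.items():
    conc = c["Z"] * f                        # mol m^-3
    amount = conc * c["V"]                   # mol
    print(f"{name:>5}: C = {conc:.3e} mol/m^3, amount = {amount:.2f} mol")
print(f"fugacity f = {f:.3e} Pa")
```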

  10. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  11. Scale modeling flow-induced vibrations of reactor components

    International Nuclear Information System (INIS)

    Mulcahy, T.M.

    1982-06-01

    Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response

  12. A Component-based Programming Model for Composite, Distributed Applications

    Science.gov (United States)

    Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.

  13. Modeling Hydraulic Components for Automated FMEA of a Braking System

    Science.gov (United States)

    2014-12-23

    Peter Struss, Alessandro Fraracci (Tech. Univ. of Munich, 85748 Garching). ... the hydraulic part of a vehicle braking system. We describe the FMEA task and the application problem and outline the foundations for automating the ... causes the diminishing of the wheel brake pressure if the brake pedal is released. When operated under the anti-lock braking system (ABS), the valves ...

  14. Orbital component extraction by time-variant sinusoidal modeling.

    Science.gov (United States)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-04-01

    Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on the (Fast) Fourier Transformation. This technique has no unique solution separating variations in amplitude and frequency. This characteristic makes it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. Here, we circumvent this drawback by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach has been proven useful to characterize audio signals (music and speech), which are non-stationary in nature (Zivanovic and Schoukens, 2010, 2012). Paleoclimate proxy signals and audio signals have in nature similar dynamics; the only difference is the frequency relationship between the different components. A harmonic frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, the latter difference is irrelevant for the problem at hand. Using a sliding window approach, the model captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency, with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretation, whereas the latter are estimated by means of linear least-squares. As an output, the model provides the orbital component waveform, either in the depth or time domain. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. Frequency modulation patterns can be used to reconstruct changes in accumulation rate, whereas amplitude modulation can be used to reconstruct e.g. eccentricity-modulated precession. The time-variant sinusoidal model
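
    Once the mean frequency of an orbital component is fixed, the polynomial amplitude/phase modulation described above reduces to a linear least-squares fit. The sketch below applies that idea to a synthetic component and recovers instantaneous amplitude and frequency from the complex polynomial envelope; it is a simplified single-window illustration, not the authors' implementation.

```python
# Sketch of the time-variant sinusoidal idea: a stationary sinusoid at the component's mean
# frequency f0, modulated by a single (complex) polynomial whose coefficients are estimated
# by linear least squares. Synthetic test signal; not the authors' implementation.
import numpy as np

t = np.linspace(0.0, 100.0, 2000)            # e.g., depth or time axis (arbitrary units)
f0 = 0.05                                    # assumed mean frequency of the orbital component

# Synthetic component: slowly varying amplitude, a small frequency drift, plus noise.
true_amp = 1.0 + 0.3 * t / t[-1]
signal = true_amp * np.cos(2 * np.pi * (f0 + 2e-4 * (t - 50.0) / 100.0) * t)
signal += 0.05 * np.random.default_rng(4).normal(size=t.size)

# Design matrix: columns t^k cos(2*pi*f0*t) and t^k sin(2*pi*f0*t), k = 0..K.
K = 3
tn = t / t[-1]                               # normalize to keep the LS problem well conditioned
cos_, sin_ = np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)
A = np.column_stack([tn**k * cos_ for k in range(K + 1)] +
                    [tn**k * sin_ for k in range(K + 1)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)

# Complex polynomial envelope p(t); the model is Re{ p(t) * exp(j*2*pi*f0*t) }.
a, b = coef[:K + 1], coef[K + 1:]
p = sum((a[k] - 1j * b[k]) * tn**k for k in range(K + 1))

inst_amp = np.abs(p)                                                    # instantaneous amplitude
inst_freq = f0 + np.gradient(np.unwrap(np.angle(p)), t) / (2 * np.pi)   # instantaneous frequency
print("amplitude range:", inst_amp.min().round(2), "-", inst_amp.max().round(2))
print("frequency range:", inst_freq.min().round(4), "-", inst_freq.max().round(4))
```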

  15. Are small-scale field-aligned currents and magnetosheath-like particle precipitation signatures of the same low-altitude cusp?

    DEFF Research Database (Denmark)

    Watermann, J.; Stauning, P.; Luhr, H.

    2009-01-01

    We examined some 75 observations from the low-altitude Earth orbiting DMSP, Ørsted and CHAMP satellites which were taken in the region of the nominal cusp. Our objective was to determine whether the actually observed cusp locations as inferred from magnetosheath-like particle precipitation ("particle cusp") and intense small-scale magnetic field variations ("current cusp"), respectively, were identical and were consistent with the statistically expected latitude of the cusp derived from a huge number of charged particle spectrograms ("statistical cusp"). The geocentric coordinates of the satellites were converted into AACGM coordinates, and the geomagnetic latitude of the cusp boundaries (as indicated by precipitating particles and small-scale field-aligned currents) set in relation to the IMF-B-z dependent latitude of the equatorward boundary of the statistical cusp. We find...

  16. Modelling raster-based monthly water balance components for Europe

    Energy Technology Data Exchange (ETDEWEB)

    Ulmen, C.

    2000-11-01

    The terrestrial runoff component is a comparatively small but sensitive and thus significant quantity in the global energy and water cycle at the interface between landmass and atmosphere. As opposed to soil moisture and evapotranspiration, which critically determine water vapour fluxes and thus water and energy transport, it can be measured as an integrated quantity over a large area, i.e. the river basin. This peculiarity makes terrestrial runoff ideally suited for the calibration, verification and validation of general circulation models (GCMs). Gauging stations are not homogeneously distributed in space. Moreover, time series are not necessarily continuously measured, nor do they in general have overlapping time periods. To overcome these problems with regard to the regular grid spacing used in GCMs, different methods can be applied to transform irregular data into regular, so-called gridded runoff fields. The present work aims to directly compute the gridded components of the monthly water balance (including gridded runoff fields) for Europe by application of the well-established raster-based macro-scale water balance model WABIMON used at the Federal Institute of Hydrology, Germany. Model calibration and validation is performed by separate examination of 29 representative European catchments. Results indicate a general applicability of the model, delivering reliable overall patterns and integrated quantities on a monthly basis. For time steps of less than two weeks, further research and structural improvements of the model are suggested. (orig.)

  17. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analysis of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present as often happens in practice, to understand covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm on maximizing a penalized likelihood. For inferential purpose, we apply the Louis formula to obtain point-wise confidence intervals for noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.

  18. Modeling of Shock Propagation and Attenuation in Viscoelastic Components

    Directory of Open Access Journals (Sweden)

    R. Rusovici

    2001-01-01

    Full Text Available Protection from the potentially damaging effects of shock loading is a common design requirement for diverse mechanical structures ranging from shock accelerometers to spacecraft. High-damping viscoelastic materials are employed in the design of geometrically complex, impact-absorbent components. Since shock transients are characterized by a broad frequency spectrum, it is imperative to properly model the frequency dependence of material behavior over a wide frequency range. The Anelastic Displacement Fields (ADF) method is employed herein to model frequency dependence within a time-domain finite element framework. Axisymmetric ADF finite elements are developed and then used to model shock propagation and absorption through viscoelastic structures. The model predictions are verified against longitudinal wave propagation experimental data and theory.

  19. Three-Component Forward Modeling for Transient Electromagnetic Method

    Directory of Open Access Journals (Sweden)

    Bin Xiong

    2010-01-01

    Full Text Available In general, only the time derivative of the vertical magnetic field is considered in the data interpretation of the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique has gradually become unable to satisfy the demands of field exploration. To improve the integrated interpretation precision of TEM, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM based on the independent electric and magnetic fields has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent, that is, to only one fourth of that of the conventional algorithm. In order to illustrate the feasibility and usefulness of the present algorithm, several typical geoelectric models of the TEM responses produced by loop sources at the air-earth interface are presented. The results of the numerical experiments show that the computation speed of the present scheme is increased considerably and that three-component interpretation gets the most out of the collected data, from which the spatial characteristics of the anomalous body can be analyzed and interpreted more comprehensively.

  20. Evaluating fugacity models for trace components in landfill gas.

    Science.gov (United States)

    Shafi, Sophie; Sweetman, Andrew; Hough, Rupert L; Smith, Richard; Rosevear, Alan; Pollard, Simon J T

    2006-12-01

    A fugacity approach was evaluated to reconcile loadings of vinyl chloride (chloroethene), benzene, 1,3-butadiene and trichloroethylene in waste with concentrations observed in landfill gas monitoring studies. An evaluative environment derived from fictitious but realistic properties such as volume, composition, and temperature, constructed with data from the Brogborough landfill (UK) test cells was used to test a fugacity approach to generating the source term for use in landfill gas risk assessment models (e.g. GasSim). SOILVE, a dynamic Level II model adapted here for landfills, showed greatest utility for benzene and 1,3-butadiene, modelled under anaerobic conditions over a 10 year simulation. Modelled concentrations of these components (95,300 microg m(-3); 43 microg m(-3)) fell within measured ranges observed in gas from landfills (24,300-180,000 microg m(-3); 20-70 microg m(-3)). This study highlights the need (i) for representative and time-referenced biotransformation data; (ii) to evaluate the partitioning characteristics of organic matter within waste systems and (iii) for a better understanding of the role that gas extraction rate (flux) plays in producing trace component concentrations in landfill gas.
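
    The fugacity bookkeeping that underlies such models can be sketched in a few lines (a Level I-style equilibrium calculation, not the SOILVE/GasSim source term itself). The compartments, volumes, Z-value expressions and property values below are illustrative assumptions only.

```python
# Minimal Level I-style fugacity sketch: at equilibrium, fugacity
# f = total moles / sum(Z_i * V_i), and the amount in each compartment is
# f * Z_i * V_i. All numbers are placeholders, not measured landfill properties.
R, T = 8.314, 298.15             # J/(mol K), K
H = 2.5e4                        # Pa m3/mol, hypothetical Henry's law constant
Kow = 500.0                      # hypothetical octanol-water partition coefficient

volumes = {"gas": 1.0e5, "leachate": 1.0e3, "waste_organic": 5.0e3}  # m3
Z = {
    "gas": 1.0 / (R * T),                # mol/(m3 Pa)
    "leachate": 1.0 / H,
    "waste_organic": 0.41 * Kow / H,     # crude organic-phase capacity estimate
}

total_moles = 100.0
f = total_moles / sum(Z[c] * volumes[c] for c in volumes)   # fugacity in Pa
for c in volumes:
    moles = f * Z[c] * volumes[c]
    conc_ug_m3 = moles / volumes[c] * 78.11 * 1e6            # e.g. benzene molar mass
    print(f"{c:14s} {moles:10.3f} mol   {conc_ug_m3:12.1f} ug/m3")
```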

  1. Collective and static properties of model two-component plasmas

    International Nuclear Information System (INIS)

    Arkhipov, Yu. V.; Askaruly, A.; Davletov, A. E.; Meirkanova, G. M.; Ballester, D.; Tkachenko, I. M.

    2007-01-01

    Classical MD data on the charge-charge dynamic structure factor of two-component plasmas (TCP) modeled in Phys. Rev. A 23, 2041 (1981) are analyzed using the sum rules and other exact relations. The convergent power moments of the imaginary part of the model system dielectric function are expressed in terms of its partial static structure factors, which are computed by the method of hypernetted chains using the Deutsch effective potential. High-frequency asymptotic behavior of the dielectric function is specified to include the effects of inverse bremsstrahlung. The agreement with the MD data is improved, and important statistical characteristics of the model TCP, such as the probability to find both electron and ion at one point, are determined

  2. Comparison of analytical eddy current models using principal components analysis

    Science.gov (United States)

    Contant, S.; Luloff, M.; Morelli, J.; Krause, T. W.

    2017-02-01

    Monitoring the gap between the pressure tube (PT) and the calandria tube (CT) in CANDU® fuel channels is essential, as contact between the two tubes can lead to delayed hydride cracking of the pressure tube. Multifrequency transmit-receive eddy current non-destructive evaluation is used to determine this gap, as this method has different depths of penetration and variable sensitivity to noise, unlike single-frequency eddy current non-destructive evaluation. An analytical model based on the Dodd and Deeds solutions, and a second model that accounts for normal and lossy self-inductances and a non-coaxial pickup coil, are examined for representing the response of an eddy current transmit-receive probe when considering factors that affect the gap response, such as pressure tube wall thickness and pressure tube resistivity. The multifrequency model data were analyzed using principal components analysis (PCA), a statistical method used to reduce the data set into one with fewer variables. The results of the PCA of the analytical models were then compared to PCA performed on a previously obtained experimental data set. The models gave similar results under variable PT wall thickness conditions, but the non-coaxial coil model, which accounts for self-inductive losses, performed significantly better than the Dodd and Deeds model under variable resistivity conditions.

  3. Modelling safety of multistate systems with ageing components

    Science.gov (United States)

    Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna

    2016-06-01

    An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of the ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multi-state system risk function and the moment at which the system exceeds the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive "m out of n: F" system are presented as well.

  4. No Change in Running Mechanics With Live High-Train Low Altitude Training in Elite Distance Runners.

    Science.gov (United States)

    Stickford, Abigail S L; Wilhite, Daniel P; Chapman, Robert F

    2017-01-01

    Investigations into ventilatory, metabolic, and hematological changes with altitude training have been completed; however, there is a lack of research exploring potential gait-kinematic changes after altitude training, despite a common complaint of athletes being a lack of leg "turnover" on return from altitude training. To determine whether select kinematic variables changed in a group of elite distance runners after 4 wk of altitude training, six elite male distance runners completed a 28-d altitude-training intervention in Flagstaff, AZ (2150 m), following a modified "live high-train low" model, wherein higher-intensity runs were performed at lower altitudes (945-1150 m) and low-intensity sessions were completed at higher altitudes (1950-2850 m). Gait parameters were measured 2-9 d before departure to altitude and 1 to 2 d after returning to sea level at running speeds of 300-360 m/min. No differences were found in ground-contact time, swing time, or stride length or frequency after altitude training (P > .05). Running mechanics are not affected by chronic altitude training in elite distance runners. The data suggest that either chronic training at altitude truly has no effect on running mechanics, or that completing the live high-train low model of altitude training, where higher-velocity workouts are completed at lower elevations, mitigates any negative mechanical adaptations that may be associated with chronic training at slower speeds.

  5. A multi-component evaporation model for beam melting processes

    Science.gov (United States)

    Klassen, Alexander; Forster, Vera E.; Körner, Carolin

    2017-02-01

    In additive manufacturing using laser or electron beam melting technologies, evaporation losses and changes in chemical composition are known issues when processing alloys with volatile elements. In this paper, a recently described numerical model based on a two-dimensional free surface lattice Boltzmann method is further developed to incorporate the effects of multi-component evaporation. The model takes into account the local melt pool composition during heating and fusion of metal powder. For validation, the titanium alloy Ti-6Al-4V is melted by selective electron beam melting and analysed using mass loss measurements and high-resolution microprobe imaging. Numerically determined evaporation losses and spatial distributions of aluminium compare well with experimental data. Predictions of the melt pool formation in bulk samples provide insight into the competition between the loss of volatile alloying elements from the irradiated surface and their advective redistribution within the molten region.

  6. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
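
    A minimal sketch of the contrast between holistic PCA loadings and sparse loadings, using scikit-learn rather than the article's Matlab implementation; the synthetic "shape" data below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

# Two latent effects, each confined to a subset of landmarks, plus noise.
rng = np.random.default_rng(1)
n_samples, n_landmarks = 200, 30
latent = rng.standard_normal((n_samples, 2))
W = np.zeros((2, n_landmarks))
W[0, :10] = 1.0          # first effect influences only landmarks 0-9
W[1, 20:] = 1.0          # second effect influences only landmarks 20-29
X = latent @ W + 0.05 * rng.standard_normal((n_samples, n_landmarks))

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

# Ordinary PCA gives dense loadings; sparse PCA isolates the two effects.
print("nonzero loadings, PCA :", np.count_nonzero(np.abs(pca.components_) > 1e-3))
print("nonzero loadings, SPCA:", np.count_nonzero(spca.components_))
```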

  7. Flexible Multibody Systems Models Using Composite Materials Components

    International Nuclear Information System (INIS)

    Neto, Maria Augusta; Ambrósio, Jorge A. C.; Leal, Rogério Pereira

    2004-01-01

    The use of a multibody methodology to describe the large motion of complex systems that experience structural deformations enables the representation of the complete system motion, the relative kinematics between the components involved, the deformation of the structural members and the inertia coupling between the large rigid body motion and the system elastodynamics. In this work, the flexible multibody dynamics formulations of complex models are extended to include elastic components made of composite materials, which may be laminated and anisotropic. The deformation of any structural member must be elastic and linear when described in a coordinate frame fixed to one or more material points of its domain, regardless of the complexity of its geometry. To achieve the proposed flexible multibody formulation, a finite element model for each flexible body is used. For the beam composite material elements, the section properties are found using an asymptotic procedure that involves a two-dimensional finite element analysis of their cross-section. The equations of motion of the flexible multibody system are solved using an augmented Lagrangian formulation, and the accelerations and velocities are integrated in time using a multi-step, multi-order integration algorithm based on the Gear method

  8. Sildenafil increased exercise capacity during hypoxia at low altitudes and at Mount Everest base camp: a randomized, double-blind, placebo-controlled crossover trial.

    Science.gov (United States)

    Ghofrani, Hossein A; Reichenberger, Frank; Kohstall, Markus G; Mrosek, Eike H; Seeger, Timon; Olschewski, Horst; Seeger, Werner; Grimminger, Friedrich

    2004-08-03

    Alveolar hypoxia causes pulmonary hypertension and enhanced right ventricular afterload, which may impair exercise tolerance. The phosphodiesterase-5 inhibitor sildenafil has been reported to cause pulmonary vasodilatation. To investigate the effects of sildenafil on exercise capacity under conditions of hypoxic pulmonary hypertension. Randomized, double-blind, placebo-controlled crossover study. University Hospital Giessen, Giessen, Germany, and the base camp on Mount Everest. 14 healthy mountaineers and trekkers. Systolic pulmonary artery pressure, cardiac output, and peripheral arterial oxygen saturation at rest and during assessment of maximum exercise capacity on cycle ergometry 1) while breathing a hypoxic gas mixture with 10% fraction of inspired oxygen at low altitude (Giessen) and 2) at high altitude (the Mount Everest base camp). Oral sildenafil, 50 mg, or placebo. At low altitude, acute hypoxia reduced arterial oxygen saturation to 72.0% (95% CI, 66.5% to 77.5%) at rest and 60.8% (CI, 56.0% to 64.5%) at maximum exercise capacity. Systolic pulmonary artery pressure increased from 30.5 mm Hg (CI, 26.0 to 35.0 mm Hg) at rest to 42.9 mm Hg (CI, 35.6 to 53.5 mm Hg) during exercise in participants taking placebo. Sildenafil, 50 mg, significantly increased arterial oxygen saturation during exercise (P = 0.005) and reduced systolic pulmonary artery pressure at rest (P < 0.001) and during exercise (P = 0.031). Of note, sildenafil increased maximum workload (172.5 W [CI, 147.5 to 200.0 W] vs. 130.6 W [CI, 108.8 to 150.0 W]; P < 0.001) and maximum cardiac output (P < 0.001) compared with placebo. At high altitude, sildenafil had no effect on arterial oxygen saturation at rest and during exercise compared with placebo. However, sildenafil reduced systolic pulmonary artery pressure at rest (P = 0.003) and during exercise (P = 0.021) and increased maximum workload (P = 0.002) and cardiac output (P = 0.015). At high altitude, sildenafil exacerbated existing headache

  9. Illumination Correction for Aerial Agriculture Images Taken at Low Altitude [Corrección de iluminación para imágenes aéreas de cultivos tomadas a baja altitud]

    Directory of Open Access Journals (Sweden)

    Juan Camilo Mejía Ospina

    2007-12-01

    Full Text Available A new method is presented to carry out colour correction of aerial agriculture images taken at low altitude on different dates, at different times and under different conditions of cloudiness. The method does not require scene targets or specific data in order to perform the correction of the actual illumination. It is based on the colour constancy technique called gamut mapping, which was adapted to take advantage of the characteristics of aerial images. For images captured under different conditions of illumination, the method allows their normalization so that they can be compared with one another. It was applied to banana plantations, but it can be extended to any type of crop, provided low-altitude aerial photography is used. It was tested and validated using synthetic images because of difficulties encountered during the acquisition of real images.

  10. Ecological, Psychological, and Cognitive Components of Reading Difficulties: Testing the Component Model of Reading in Fourth Graders across 38 Countries

    Science.gov (United States)

    Chiu, Ming Ming; McBride-Chang, Catherine; Lin, Dan

    2012-01-01

    The authors tested the component model of reading (CMR) among 186,725 fourth grade students from 38 countries (45 regions) on five continents by analyzing the 2006 Progress in International Reading Literacy Study data using measures of ecological (country, family, school, teacher), psychological, and cognitive components. More than 91% of the…

  11. Modeling Organic Contaminant Desorption from Municipal Solid Waste Components

    Science.gov (United States)

    Knappe, D. R.; Wu, B.; Barlaz, M. A.

    2002-12-01

    Approximately 25% of the sites on the National Priority List (NPL) of Superfund are municipal landfills that accepted hazardous waste. Unlined landfills typically result in groundwater contamination, and priority pollutants such as alkylbenzenes are often present. To select cost-effective risk management alternatives, better information on factors controlling the fate of hydrophobic organic contaminants (HOCs) in landfills is required. The objectives of this study were (1) to investigate the effects of HOC aging time, anaerobic sorbent decomposition, and leachate composition on HOC desorption rates, and (2) to simulate HOC desorption rates from polymers and biopolymer composites with suitable diffusion models. Experiments were conducted with individual components of municipal solid waste (MSW) including polyvinyl chloride (PVC), high-density polyethylene (HDPE), newsprint, office paper, and model food and yard waste (rabbit food). Each of the biopolymer composites (office paper, newsprint, rabbit food) was tested in both fresh and anaerobically decomposed form. To determine the effects of aging on alkylbenzene desorption rates, batch desorption tests were performed after sorbents were exposed to toluene for 30 and 250 days in flame-sealed ampules. Desorption tests showed that alkylbenzene desorption rates varied greatly among MSW components (PVC slowest, fresh rabbit food and newsprint fastest). Furthermore, desorption rates decreased as aging time increased. A single-parameter polymer diffusion model successfully described PVC and HDPE desorption data, but it failed to simulate desorption rate data for biopolymer composites. For biopolymer composites, a three-parameter biphasic polymer diffusion model was employed, which successfully simulated both the initial rapid and the subsequent slow desorption of toluene. Toluene desorption rates from MSW mixtures were predicted for typical MSW compositions in the years 1960 and 1997. For the older MSW mixture, which had a
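
    The single-parameter polymer diffusion picture mentioned above can be illustrated with the classical plane-sheet solution; this is an editor's sketch with invented parameter values, not the fitted model from the study.

```python
import numpy as np

def fractional_release(t, D, half_thickness, n_terms=50):
    """Crank plane-sheet solution: fraction of sorbed mass released by time t.

    The single fitting parameter is effectively D / half_thickness**2; the
    numbers used below are illustrative, not values fitted to MSW components.
    """
    n = np.arange(n_terms)
    lam = (2 * n + 1) ** 2 * np.pi ** 2
    series = np.sum(8.0 / lam * np.exp(-lam[None, :] * D * np.asarray(t)[:, None]
                                       / (4.0 * half_thickness ** 2)), axis=1)
    return 1.0 - series

t_days = np.linspace(0.0, 250.0, 6)
print(fractional_release(t_days * 86400.0, D=1e-14, half_thickness=1e-4))  # m2/s, m
```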

  12. Modelling safety of multistate systems with ageing components

    Energy Technology Data Exchange (ETDEWEB)

    Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna [Gdynia Maritime University, Department of Mathematics ul. Morska 81-87, Gdynia 81-225 Poland (Poland)

    2016-06-08

    An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of the ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multi-state system risk function and the moment at which the system exceeds the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.

  13. Modeling of Strain Effects in Multi-component Semiconductors

    Science.gov (United States)

    Arjmand, Mehrdad

    Strain affects the properties of crystalline material by changing the atomic symmetry. Controlling the strain in semiconductors helps to tune material properties and design new materials. For instance, strained semiconductor heterostructures have remarkably improved the efficiency of traditional solar cells. Another example of strain application is in electronic devices. Strained heterostructure nanowires provide better control over the electronic properties of gates used in transistors. Gate-all-around nanowires are promising candidates to power microprocessors in the future. Strain is also used to make quantum dot structures from semiconductors. These quantum dots are used in quantum computing, diode lasers and sensors. Once the stored strain in a structure reaches a critical limit, it relaxes by triggering different phenomena in the structure. For instance, strain causes morphology change, plastic deformation, phase separation and intermixing, fracture, buckling, bulging and peeling. In order to use these strained structures for design purposes, it is critical to understand these different relaxation phenomena and be able to control them. Modeling provides a powerful framework to understand different relaxation mechanisms and provide guidance to control these strain-induced phenomena. In this thesis, I have developed a continuum-based model called "phase field" to study morphology change, plastic deformation and phase separation in multi-component semiconductors during growth and annealing processes. The advantage of the phase field approach compared to some other modeling techniques is that it includes the effects of both thermodynamics and kinetics. Also, I have developed a continuum-based elasto-plasticity model to study the effects of plastic relaxation in semiconductor nanowires. This model can be particularly useful for piezoelectric and surface stability analysis of nanowires.

  14. Effects of antioxidant vitamins on newborn and placental traits in gestations at high altitude: comparative study in high and low altitude native sheep.

    Science.gov (United States)

    Parraguez, Víctor H; Atlagich, Miljenko; Araneda, Oscar; García, Carlos; Muñoz, Andrés; De Los Reyes, Mónica; Urquieta, Bessie

    2011-01-01

    The present study evaluated the hypothesis that the effects of hypoxia on sheep pregnancies at high altitude (HA) are mediated by oxidative stress and that antioxidant vitamins may prevent these effects. Both HA native and newcomer ewes were maintained at an altitude of 3,589 m during mating and pregnancy. Control low altitude (LA) native ewes were maintained at sea level. Half of each group received daily oral supplements of vitamins C (500 mg) and E (350 IU) during mating and gestation. Near term, maternal plasma vitamin levels and oxidative stress biomarkers were measured. At delivery, lambs were weighed and measured, and placentas were recovered for macroscopic and microscopic evaluation. Vitamin concentrations in supplemented ewes were two- or threefold greater than in non-supplemented ewes. Plasma carbonyls and malondialdehyde in non-supplemented ewes were consistent with a state of oxidative stress, which was prevented by vitamin supplementation. Vitamin supplementation increased lamb birthweight and cotyledon number in both HA native and newcomer ewes, although placental weight and cotyledon surface were diminished. Placentas from vitamin-supplemented HA ewes were thus similar to those from ewes at sea level in weight and in the number and diameter of cotyledons. Vitamin supplementation had no effect on LA pregnancies. In conclusion, supplementation with vitamins C and E during pregnancy at HA prevents oxidative stress, improving pregnancy outcomes.

  15. Optimization of Component Based Software Engineering Model Using Neural Network

    OpenAIRE

    Gaurav Kumar; Pradeep Kumar Bhatia

    2014-01-01

    The goal of Component Based Software Engineering (CBSE) is to deliver high-quality, more reliable and more maintainable software systems in a shorter time and within a limited budget by reusing and combining existing quality components. A high-quality system can be achieved by using quality components together with a framework and integration process that play a significant role. So, the techniques and methods used for quality assurance and assessment of a component-based system are different from those of the tr...

  16. Stochastic Models of Defects in Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2013-01-01

    The drivetrain in a wind turbine nacelle typically consists of a variety of heavily loaded components, like the main shaft, bearings, gearbox and generator. The variations in environmental load challenge the performance of all the components of the drivetrain. Failure of each of these components ...

  17. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach to parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior, which can be approximated with an appropriate probability distribution (in this paper a lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)
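
    A hedged sketch of this kind of Bayesian update for a time-related component is given below (grid-based posterior for a Poisson failure rate under a lognormal prior); all numbers are invented and this is not the Krsko PSA procedure itself.

```python
import numpy as np

# Hypothetical data: k failures over total exposure T, lognormal prior on the rate.
k, T = 3, 2.0e5                       # failures, component-hours (illustrative)
prior_median, prior_ef = 1e-5, 10.0   # lognormal prior median and error factor
mu = np.log(prior_median)
sigma = np.log(prior_ef) / 1.645      # error factor = 95th/50th percentile ratio

lam = np.logspace(-8, -2, 2000)       # grid over the failure rate (per hour)
log_prior = (-np.log(lam * sigma * np.sqrt(2 * np.pi))
             - (np.log(lam) - mu) ** 2 / (2 * sigma ** 2))
log_like = k * np.log(lam * T) - lam * T          # Poisson likelihood (constant dropped)
log_post = log_prior + log_like
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, lam)                        # normalize numerically

mean = np.trapz(lam * post, lam)
print(f"posterior mean failure rate ~ {mean:.2e} per hour")
```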

  18. Estimation models of variance components for farrowing interval in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante Neto

    2009-02-01

    Full Text Available The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested which contained the fixed effects (contemporary group and covariables) and the direct additive genetic and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain as a response to selection. The common litter environmental and the maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in the genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.

  19. The Kilauea 1974 Flow: Quantitative Morphometry of Lava Flows using Low Altitude Aerial Image Data using a Kite-based Platform in the Field

    Science.gov (United States)

    Scheidt, S. P.; Whelley, P.; Hamilton, C.; Bleacher, J. E.; Garry, W. B.

    2015-12-01

    The December 31, 1974 lava flow from Kilauea Caldera, Hawaii within the Hawaii Volcanoes National Park was selected for field campaigns as a terrestrial analog for Mars in support of NASA Planetary Geology and Geophysics (PGG) research and the Remote, In Situ and Synchrotron Studies for Science and Exploration (RIS4E) node of the Solar System Exploration Research Virtual Institute (SSERVI) program. The lava flow was a rapidly emplaced unit that was strongly influenced by existing topography, which favored the formation of a tributary lava flow system. The unit includes a diverse range of surface textures (e.g., pāhoehoe, ʻaʻā, and transitional lavas), and structural features (e.g., streamlined islands, pits, and interactions with older tumuli). However, these features are generally below the threshold of visibility within previously acquired airborne and spacecraft data. In this study, we have generated unique, high-resolution digital images using a low-altitude Kite Aerial Photography (KAP) system during field campaigns in 2014 and 2015 (National Park Service permit #HAVO-2012-SCI-0025). The kite-based mapping platform (nadir-viewing) and a radio-controlled gimbal (allowing pointing) provided similar data to those from an unmanned aerial vehicle (UAV), but with longer flight time, larger total data volumes per sortie, and fewer regulatory challenges and lower cost. Images acquired from KAP and UAVs are used to create orthomosaics and DEMs using Multi-View Stereo-Photogrammetry (MVSP) software. The three-dimensional point clouds are extremely dense, resulting in a grid resolution of < 2 cm. Airborne Light Detection and Ranging (LiDAR) / Terrestrial Laser Scanning (TLS) data have been collected for these areas and provide a basis of comparison or "ground truth" for the photogrammetric data. Our results show a good comparison with LiDAR/TLS data, each offering their own unique advantages and potential for data fusion.

  20. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs

    Science.gov (United States)

    Yahyanejad, Saeed; Rinner, Bernhard

    2015-06-01

    The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectrums, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in case of multiple image pairs.
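
    For orientation, the baseline feature-based registration pipeline of the kind the paper builds on can be sketched with OpenCV (ORB features plus a RANSAC homography). The paper's interspectral descriptor is not reproduced here, and the file names in the usage comment are hypothetical.

```python
import cv2
import numpy as np

def register_pair(img_ref, img_mov):
    """Baseline feature-based registration (ORB features + RANSAC homography)."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_mov, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    # Points in the moving image (train) mapped onto the reference image (query).
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(img_mov, H, img_ref.shape[1::-1])
    return H, warped

# Usage (hypothetical file names):
# visual = cv2.imread("visual.png", cv2.IMREAD_GRAYSCALE)
# thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
# H, thermal_aligned = register_pair(visual, thermal)
```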

  1. Modeling injection molding of net-shape active ceramic components.

    Energy Technology Data Exchange (ETDEWEB)

    Baer, Tomas (Gram Inc.); Cote, Raymond O.; Grillet, Anne Mary; Yang, Pin; Hopkins, Matthew Morgan; Noble, David R.; Notz, Patrick K.; Rao, Rekha Ranjana; Halbleib, Laura L.; Castaneda, Jaime N.; Burns, George Robert; Mondy, Lisa Ann; Brooks, Carlton, F.

    2006-11-01

    To reduce costs and hazardous wastes associated with the production of lead-based active ceramic components, an injection molding process is being investigated to replace the current machining process. Here, lead zirconate titanate (PZT) ceramic particles are suspended in a thermoplastic resin and are injected into a mold and allowed to cool. The part is then bisque fired and sintered to complete the densification process. To help design this new process we use a finite element model to describe the injection molding of the ceramic paste. Flow solutions are obtained using a coupled, finite-element based, Newton-Raphson numerical method based on the GOMA/ARIA suite of Sandia flow solvers. The evolution of the free surface is solved with an advanced level set algorithm. This approach incorporates novel methods for representing surface tension and wetting forces that affect the evolution of the free surface. Thermal, rheological, and wetting properties of the PZT paste are measured for use as input to the model. The viscosity of the PZT is highly dependent both on temperature and shear rate. One challenge in modeling the injection process is coming up with appropriate constitutive equations that capture relevant phenomenology without being too computationally complex. For this reason we model the material as a Carreau fluid and a WLF temperature dependence. Two-dimensional (2D) modeling is performed to explore the effects of the shear in isothermal conditions. Results indicate that very low viscosity regions exist near walls and that these results look similar in terms of meniscus shape and fill times to a simple Newtonian constitutive equation at the shear-thinned viscosity for the paste. These results allow us to pick a representative viscosity to use in fully three-dimensional (3D) simulation, which because of numerical complexities are restricted to using a Newtonian constitutive equation. Further 2D modeling at nonisothermal conditions shows that the choice of
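
    The constitutive choice described above (Carreau shear thinning combined with a WLF temperature shift) can be written down compactly; the parameter values in this sketch are generic placeholders rather than the measured PZT-paste properties from the report.

```python
def carreau_wlf_viscosity(shear_rate, T, eta0=1.0e4, lam=1.0, n=0.4,
                          eta_inf=0.0, T_ref=423.15, C1=17.44, C2=51.6):
    """Carreau viscosity with a WLF temperature shift (illustrative constants)."""
    log_aT = -C1 * (T - T_ref) / (C2 + (T - T_ref))      # WLF shift factor
    aT = 10.0 ** log_aT
    eta0_T, lam_T = eta0 * aT, lam * aT                   # time-temperature shift
    return eta_inf + (eta0_T - eta_inf) * (1.0 + (lam_T * shear_rate) ** 2) ** ((n - 1) / 2)

for gdot in (0.1, 10.0, 1000.0):                          # shear rates in 1/s
    print(gdot, carreau_wlf_viscosity(gdot, T=433.15))    # viscosity in Pa.s
```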

  2. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-17

    motor speed can be either positive or negative, dependent upon the propelling or regenerative braking scenario. The simulation provides three...the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking...model does not have to be an electrical machine expert to scale the model. Similar features are delivered for the battery and inverter models. The

  3. Ecological, psychological, and cognitive components of reading difficulties: testing the component model of reading in fourth graders across 38 countries.

    Science.gov (United States)

    Chiu, Ming Ming; McBride-Chang, Catherine; Lin, Dan

    2012-01-01

    The authors tested the component model of reading (CMR) among 186,725 fourth grade students from 38 countries (45 regions) on five continents by analyzing the 2006 Progress in International Reading Literacy Study data using measures of ecological (country, family, school, teacher), psychological, and cognitive components. More than 91% of the differences in student difficulty occurred at the country (61%) and classroom (30%) levels (ecological), with less than 9% at the student level (cognitive and psychological). All three components were negatively associated with reading difficulties: cognitive (student's early literacy skills), ecological (family characteristics [socioeconomic status, number of books at home, and attitudes about reading], school characteristics [school climate and resources]), and psychological (students' attitudes about reading, reading self-concept, and being a girl). These results extend the CMR by demonstrating the importance of multiple levels of factors for reading deficits across diverse cultures.

  4. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S

    2015-02-25

    The previously suggested quasi-discrete model for heating and evaporation of complex multi-component hydrocarbon fuel droplets is described. The dependence of density, viscosity, heat capacity and thermal conductivity of liquid components on carbon numbers n and temperatures is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model. This model is applied to the analysis of Diesel and gasoline fuel droplet heating and evaporation. The components with relatively close n are replaced by quasi-components with properties calculated as average properties of the a priori defined groups of actual components. Thus the analysis of the heating and evaporation of droplets consisting of many components is replaced with the analysis of the heating and evaporation of droplets consisting of relatively few quasi-components. It is demonstrated that for Diesel and gasoline fuel droplets the predictions of the model based on five quasi-components are almost indistinguishable from the predictions of the model based on twenty quasi-components for Diesel fuel droplets and are very close to the predictions of the model based on thirteen quasi-components for gasoline fuel droplets. It is recommended that in the cases of both Diesel and gasoline spray combustion modelling, the analysis of droplet heating and evaporation is based on as little as five quasi-components.
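
    The grouping step can be illustrated schematically: components with close carbon numbers are lumped into quasi-components whose carbon number is the mole-fraction-weighted group average. The composition below is invented, and the averaging of thermophysical properties is omitted.

```python
import numpy as np

# Illustrative reduction of a many-component fuel into five quasi-components.
carbon_numbers = np.arange(8, 28)                       # C8 ... C27
molar_fractions = np.exp(-0.5 * ((carbon_numbers - 16) / 4.0) ** 2)
molar_fractions /= molar_fractions.sum()                # normalize to a composition

n_quasi = 5
groups = np.array_split(np.arange(carbon_numbers.size), n_quasi)
for g in groups:
    x_g = molar_fractions[g].sum()                       # quasi-component mole fraction
    n_bar = np.sum(carbon_numbers[g] * molar_fractions[g]) / x_g
    print(f"quasi-component: n_bar = {n_bar:5.2f}, mole fraction = {x_g:5.3f}")
```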

  5. Building Component Library: An Online Repository to Facilitate Building Energy Model Creation; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Fleming, K.; Long, N.; Swindler, A.

    2012-05-01

    This paper describes the Building Component Library (BCL), the U.S. Department of Energy's (DOE) online repository of building components that can be directly used to create energy models. This comprehensive, searchable library consists of components and measures as well as the metadata which describes them. The library is also designed to allow contributors to easily add new components, providing a continuously growing, standardized list of components for users to draw upon.

  6. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, T.J.; Long, K.S.; Sayre, J.A. [Sandia National Labs., Albuquerque, NM (United States); Hull, A.L. [Sandia National Labs., Livermore, CA (United States); Carey, D.A.; Sim, J.R.; Smith, M.G. [Allied-Signal Aerospace Co., Kansas City, MO (United States). Kansas City Div.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  7. Exploring component-based approaches in forest landscape modeling

    Science.gov (United States)

    H. S. He; D. R. Larsen; D. J. Mladenoff

    2002-01-01

    Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...

  8. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  9. Modeling dynamics of biological and chemical components of aquatic ecosystems

    International Nuclear Information System (INIS)

    Lassiter, R.R.

    1975-05-01

    To provide capability to model aquatic ecosystems or their subsystems as needed for particular research goals, a modeling strategy was developed. Submodels of several processes common to aquatic ecosystems were developed or adapted from previously existing ones. Included are submodels for photosynthesis as a function of light and depth, biological growth rates as a function of temperature, dynamic chemical equilibrium, feeding and growth, and various types of losses to biological populations. These submodels may be used as modules in the construction of models of subsystems or ecosystems. A preliminary model for the nitrogen cycle subsystem was developed using the modeling strategy and applicable submodels. (U.S.)
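
    Typical functional forms for two such submodels (light-limited photosynthesis under exponential light attenuation with depth, and a temperature multiplier on growth) are sketched below; the forms and constants are illustrative assumptions, not those of the original report.

```python
import numpy as np

def irradiance(depth_m, I0=400.0, k_ext=0.5):
    return I0 * np.exp(-k_ext * depth_m)            # Beer-Lambert light attenuation

def photosynthesis(I, P_max=2.0, I_k=100.0):
    return P_max * I / (I_k + I)                     # saturating light response

def temperature_factor(T_c, T_ref=20.0, q10=2.0):
    return q10 ** ((T_c - T_ref) / 10.0)             # Q10 temperature scaling

for z in (0.0, 2.0, 5.0, 10.0):
    rate = photosynthesis(irradiance(z)) * temperature_factor(15.0)
    print(f"depth {z:4.1f} m  ->  gross growth rate {rate:5.3f} /day")
```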

  10. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-16

    Technology: Permanent Magnet Brushless DC machine • Model: Self-generating torque-speed-efficiency map • Future improvements: Induction machine...(Abrams) Diesel 150-1000 hp (Others), Alternator 24 Vdc, Bi-directional 150 kW DC-DC Converter, 400 kW AC-to-DC Converter, Energy Storage, Power Conversion...250 hp traction motor, Electrical Machines ISG Model • ISG model and its associated controls system – Automatic scaling – Scope of machines relevant

  11. Modelling of seed yield and its components in tall fescue ( Festuca ...

    African Journals Online (AJOL)

    Ridge regression analysis was used to derive a steady algorithmic model that related Z to the five components, Y1 to Y5. This model can estimate Z precisely from the values of these components. Furthermore, an approach based on the exponents of the algorithmic model could be applied to the selection for high seed yield ...
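
    A minimal sketch of such a ridge-regression fit of seed yield Z on five components Y1..Y5, with simulated data standing in for the tall fescue measurements.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulated placeholder data: five yield components measured on 60 plots.
rng = np.random.default_rng(2)
Y = rng.normal(size=(60, 5))
true_coef = np.array([0.8, 0.4, 0.3, 0.1, 0.05])
Z = Y @ true_coef + 0.2 * rng.normal(size=60)      # seed yield

model = Ridge(alpha=1.0).fit(Y, Z)                 # ridge-penalized least squares
print("ridge coefficients:", np.round(model.coef_, 3))
print("predicted yield for a new plot:", model.predict(Y[:1])[0])
```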

  12. Application of Test-day Models for Variance Components Estimation ...

    African Journals Online (AJOL)

    Julio Carvalheira

    Random regression (RR) models have been widely studied and evaluated for genetic evaluation ... The random regression allows to fit sub-models for adjusting the lactation curve, assumes a structure for genetic ... term environmental effects accounting for the autocorrelations due to cow within each lactation, and e is the.

  13. Rare earth-doped integrated glass components: modeling and optimization

    DEFF Research Database (Denmark)

    Lumholt, Ole; Bjarklev, Anders Overgaard; Rasmussen, Thomas

    1995-01-01

    For the integrated optic erbium-doped phosphate silica-amplifier, a comprehensive model is presented which includes high-concentration dissipative ion-ion interactions. Based on actual waveguide parameters, the model is seen to reproduce measured gains closely. A rigorous design optimization is p...

  14. PyCatch: Component based hydrological catchment modelling

    NARCIS (Netherlands)

    Lana-Renault, N.; Karssenberg, D.J.

    2013-01-01

    Dynamic numerical models are powerful tools for representing and studying environmental processes through time. Usually they are constructed with environmental modelling languages, which are high-level programming languages that operate at the level of thinking of the scientists. In this paper we

  15. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
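
    The step from automatically generated minimal cut sets plus component failure probabilities to a system unreliability estimate can be sketched as follows (inclusion-exclusion under independence). The cut sets and probabilities are invented, and this is not the Uppaal/UppaalSMC workflow itself.

```python
from itertools import combinations

# Hypothetical component failure probabilities and minimal cut sets.
p_fail = {"sensor": 1e-3, "controller": 5e-4, "actuator": 2e-3, "power": 1e-4}
minimal_cut_sets = [{"power"}, {"sensor", "controller"}, {"actuator", "controller"}]

def cut_set_prob(cut):                      # independent component failures
    prob = 1.0
    for c in cut:
        prob *= p_fail[c]
    return prob

def system_unreliability(cut_sets):
    total = 0.0
    for r in range(1, len(cut_sets) + 1):   # inclusion-exclusion over cut-set events
        for combo in combinations(cut_sets, r):
            union = set().union(*combo)
            total += (-1) ** (r + 1) * cut_set_prob(union)
    return total

print(f"system unreliability ~ {system_unreliability(minimal_cut_sets):.3e}")
```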

  16. The 2013 European Seismic Hazard Model: key components and results

    OpenAIRE

    Jochen Woessner; Danciu Laurentiu; Domenico Giardini; Helen Crowley; Fabrice Cotton; G. Grünthal; Gianluca Valensise; Ronald Arvidsson; Roberto Basili; Mine Betül Demircioglu; Stefan Hiemer; Carlo Meletti; Roger W. Musson; Andrea N. Rovida; Karin Sesetyan

    2015-01-01

    The 2013 European Seismic Hazard Model (ESHM13) results from a community-based probabilistic seismic hazard assessment supported by the EU-FP7 project “Seismic Hazard Harmonization in Europe” (SHARE, 2009–2013). The ESHM13 is a consistent seismic hazard model for Europe and Turkey which overcomes the limitation of national borders and includes a thorough quantification of the uncertainties. It is the first completed regional effort contributing to the “Global Earthquake Model” initiative. It m...

  17. Mouse models of neurodegenerative disease: preclinical imaging and neurovascular component.

    Science.gov (United States)

    Albanese, Sandra; Greco, Adelaide; Auletta, Luigi; Mancini, Marcello

    2017-10-26

    Neurodegenerative diseases represent great challenges for basic science and clinical medicine because of their prevalence, pathologies, lack of mechanism-based treatments, and impacts on individuals. Translational research might contribute to the study of neurodegenerative diseases. The mouse has become a key model for studying disease mechanisms that might recapitulate, in part, some aspects of the corresponding human diseases. Neurodegenerative disorders are very complicated and multifactorial, which has to be taken into account when testing drugs. Most drug screens in mice are very difficult to interpret and often useless. Mouse models should be considered 'pathway models' rather than models of the whole complicated construct that makes up a human disease. Non-invasive in vivo imaging in mice has gained increasing interest in preclinical research in recent years thanks to the availability of high-resolution single-photon emission computed tomography (SPECT), positron emission tomography (PET), high-field magnetic resonance and optical imaging scanners, and of highly specific contrast agents. Behavioral tests are useful tools to characterize different animal models of neurodegenerative pathology. Furthermore, many authors have observed vascular pathological features associated with the different neurodegenerative disorders. The aim of this review is to describe the existing animal models of neurodegenerative disorders, the behavioral tests and preclinical imaging techniques used for diagnosis, and the vascular pathological features associated with these diseases.

  18. System level modeling and component level control of fuel cells

    Science.gov (United States)

    Xue, Xingjian

    This dissertation investigates the fuel cell systems and the related technologies in three aspects: (1) system-level dynamic modeling of both PEM fuel cell (PEMFC) and solid oxide fuel cell (SOFC); (2) condition monitoring scheme development of PEM fuel cell system using model-based statistical method; and (3) strategy and algorithm development of precision control with potential application in energy systems. The dissertation first presents a system level dynamic modeling strategy for PEM fuel cells. It is well known that water plays a critical role in PEM fuel cell operations. It makes the membrane function appropriately and improves the durability. The low temperature operating conditions, however, impose modeling difficulties in characterizing the liquid-vapor two phase change phenomenon, which becomes even more complex under dynamic operating conditions. This dissertation proposes an innovative method to characterize this phenomenon, and builds a comprehensive model for PEM fuel cell at the system level. The model features the complete characterization of multi-physics dynamic coupling effects with the inclusion of dynamic phase change. The model is validated using Ballard stack experimental result from open literature. The system behavior and the internal coupling effects are also investigated using this model under various operating conditions. Anode-supported tubular SOFC is also investigated in the dissertation. While the Nernst potential plays a central role in characterizing the electrochemical performance, the traditional Nernst equation may lead to incorrect analysis results under dynamic operating conditions due to the current reverse flow phenomenon. This dissertation presents a systematic study in this regard to incorporate a modified Nernst potential expression and the heat/mass transfer into the analysis. The model is used to investigate the limitations and optimal results of various operating conditions; it can also be utilized to perform the
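
    For reference, the classical Nernst potential that the dissertation argues must be modified under dynamic conditions can be computed as below; constants are textbook-style values for illustration, and the modified expression itself is not reproduced.

```python
import numpy as np

def nernst_potential(T, p_H2, p_O2, p_H2O, E0_T):
    """Classical Nernst potential for H2 + 1/2 O2 -> H2O(g).

    E0_T is the standard potential at the operating temperature (supplied by the
    user, e.g. from Gibbs-energy tables); gas partial pressures are in atm.
    """
    R, F, n = 8.314, 96485.0, 2.0
    return E0_T + (R * T) / (n * F) * np.log(p_H2 * np.sqrt(p_O2) / p_H2O)

# Illustrative SOFC-like operating point (~800 C); E0_T ~ 0.98 V is an
# approximate value for gaseous water at this temperature.
print(nernst_potential(T=1073.15, p_H2=0.97, p_O2=0.21, p_H2O=0.03, E0_T=0.98))
```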

  19. A Bayesian Analysis of Unobserved Component Models Using Ox

    Directory of Open Access Journals (Sweden)

    Charles S. Bos

    2011-05-01

    Full Text Available This article details a Bayesian analysis of the Nile river flow data, using a similar state space model to other articles in this volume. For this data set, Metropolis-Hastings and Gibbs sampling algorithms are implemented in the programming language Ox. These Markov chain Monte Carlo methods only provide output conditioned upon the full data set. For filtered output, conditioning only on past observations, the particle filter is introduced. The sampling methods are flexible, and this advantage is used to extend the model to incorporate a stochastic volatility process. The volatility changes in both the Nile data and in daily S&P 500 return data are investigated. The posterior density of parameters and states is found to provide information on which elements of the model are easily identifiable, and which elements are estimated with less precision.
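
    A minimal filtered-output sketch for the same class of model is given below: a plain Kalman filter for the local level model, written in Python rather than Ox. The simulated data and the variance values are illustrative (the latter loosely based on commonly quoted Nile estimates), not the article's MCMC or particle-filter implementation.

```python
import numpy as np

def local_level_filter(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for y_t = mu_t + eps_t, mu_{t+1} = mu_t + eta_t."""
    a, p = a0, p0
    filtered = np.empty(len(y))
    for t, obs in enumerate(y):
        f = p + sigma2_eps                 # prediction error variance
        k = p / f                          # Kalman gain
        a = a + k * (obs - a)              # filtered state estimate
        p = p * (1.0 - k) + sigma2_eta     # predicted state variance for t+1
        filtered[t] = a
    return filtered

# Hypothetical usage with simulated "Nile-like" data:
rng = np.random.default_rng(3)
level = np.cumsum(rng.normal(scale=np.sqrt(1469.0), size=100)) + 1120.0
y = level + rng.normal(scale=np.sqrt(15099.0), size=100)
print(local_level_filter(y, sigma2_eps=15099.0, sigma2_eta=1469.0)[:5])
```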

  20. Modeling of a remote inspection system for NSSS components

    International Nuclear Information System (INIS)

    Choi, Yoo Rark; Kim, Jae Hee; Lee, Jae Cheol

    2003-03-01

    Safety inspection of the safety-critical units of nuclear power plants has so far been carried out using off-line technology, so the safety inspection systems and inspection data cannot be accessed over a network such as the Internet. To overcome these problems, we are developing an on-line control and data access system, based on WWW and Java technologies, which can be used during plant operation. Users can access the inspection systems and inspection data using only a web browser. This report discusses the analysis of the existing remote system and essential techniques such as the Web, Java, the client/server model, and the multi-tier model. It also discusses the system modeling that we have developed using these techniques and provides solutions for developing an on-line control and data access system.

  1. A discrete surface growth model for two components

    International Nuclear Information System (INIS)

    El-Nashar, H.F.; Cerdeira, H.A.

    2000-04-01

    We present a ballistic deposition model for the surface growth of two species, A and C. Numerical simulations of the growth kinetics show a deviation from the Kardar-Parisi-Zhang universality class, a model valid for only one kind of deposited particle. The study also shows that when the deposition of particles with less active bonds occurs more frequently, the voids under the surface become relevant. However, the increase in overhang/void processes under the moving interface does not greatly strengthen the local surface gradient. (author)

  2. Checking Architectural and Implementation Constraints for Domain-Specific Component Frameworks using Models

    OpenAIRE

    Noguera, Carlos; Loiret, Frédéric

    2009-01-01

    Software components are used in various application domains, and many component models and frameworks have been proposed to fulfill domain-specific requirements. The ad-hoc development of these component frameworks hampers the reuse of tools and abstractions across different frameworks. We believe that in order to promote the reuse of components within various domain contexts a homogeneous design approach is needed. A key requirement of such an a...

  3. Feedback loops and temporal misalignment in component-based hydrologic modeling

    Science.gov (United States)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
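
    A minimal sketch of the coupling pattern described above: two zero-dimensional stores exchange a diffusive mass flux across a shared interface, with the finer-stepped component sampling the coarser one by linear interpolation in time. The component interface, step sizes and exchange coefficient are invented for illustration and do not reproduce the OpenMI API; the lagged flux evaluation is exactly where the mass balance error discussed in the paper can enter.

    ```python
    import numpy as np

    class Store:
        """Zero-dimensional, well-mixed store; concentration updated by an imposed flux."""
        def __init__(self, conc, volume, dt):
            self.conc, self.volume, self.dt, self.time = conc, volume, dt, 0.0
            self.history = [(0.0, conc)]

        def step(self, flux_in):
            self.conc += flux_in * self.dt / self.volume
            self.time += self.dt
            self.history.append((self.time, self.conc))

        def conc_at(self, t):
            times, concs = zip(*self.history)
            return np.interp(t, times, concs)       # linear interpolation in time

    k = 0.05                                        # interface exchange coefficient (assumed)
    water = Store(conc=1.0, volume=10.0, dt=1.0)    # coarse time step
    sediment = Store(conc=0.0, volume=5.0, dt=0.5)  # fine time step, misaligned with water

    while sediment.time < 50.0:
        # fine component samples the coarse one by interpolating its stored history
        flux = k * (water.conc_at(sediment.time) - sediment.conc)
        sediment.step(flux)
        if sediment.time >= water.time + water.dt:
            # coarse component sees a lagged, interpolated sediment state: this asymmetry
            # is where mass-balance error can be introduced
            flux_back = k * (sediment.conc_at(water.time) - water.conc)
            water.step(flux_back)

    print(round(water.conc, 4), round(sediment.conc, 4))
    ```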

  4. The evolving neuroanatomical component of the Foundational Model of Anatomy.

    Science.gov (United States)

    Martin, Richard F; Rickard, Kurt; Mejino, José L V; Agoncillo, Augusto V; Brinkley, James F; Rosse, Cornelius

    2003-01-01

    In order to meet the need for an expressive ontology in neuroinformatics, we have integrated the extensive terminologies of NeuroNames and Terminologia Anatomica into the Foundational Model of Anatomy (FMA). We have enhanced the FMA to accommodate information unique to neuronal structures, such as axonal input/output relationships.

  5. Soil Structure - A Neglected Component of Land-Surface Models

    Science.gov (United States)

    Fatichi, S.; Or, D.; Walko, R. L.; Vereecken, H.; Kollet, S. J.; Young, M.; Ghezzehei, T. A.; Hengl, T.; Agam, N.; Avissar, R.

    2017-12-01

    Soil structure is largely absent in most standard sampling and measurements and in the subsequent parameterization of soil hydraulic properties deduced from soil maps and used in Earth System Models. This omission propagates into the pedotransfer functions that deduce parameters of soil hydraulic properties primarily from soil textural information. Such simple parameterization is an essential ingredient in the practical application of any land surface model. Despite the critical role of soil structure (biopores formed by decaying roots, aggregates, etc.) in defining soil hydraulic functions, only a few studies have attempted to incorporate soil structure into models, and they mostly looked at the effects on preferential flow and solute transport pathways at the soil profile scale; the role of soil structure in mediating large-scale fluxes remains understudied. Here, we focus on rectifying this gap and demonstrating potential impacts on surface and subsurface fluxes and system-wide eco-hydrologic responses. The study proposes a systematic way of correcting the soil water retention and hydraulic conductivity functions to account for soil structure, with major implications for near-saturated hydraulic conductivity. The modification to the basic soil hydraulic parameterization is assumed to be a function of biological activity, summarized by Gross Primary Production. A land-surface model with dynamic vegetation is used to carry out numerical simulations with and without the role of soil structure for 20 locations characterized by different climates and biomes across the globe. Including soil structure considerably affects the partitioning between infiltration and runoff and, consequently, leakage at the base of the soil profile (recharge). In several locations characterized by wet climates, a few hundred mm per year of surface runoff become deep recharge when soil structure is accounted for. Changes in energy fluxes, total evapotranspiration and vegetation productivity

  6. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support the modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of components and the different types of flows propagating through them. Each component has a function table that describes its input-output relations. For components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented.
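
    The flavor of such component function tables and of a trace-back expansion can be sketched as follows; the table format, component names and output structure are invented for illustration and are not the authors' algorithm or their fire sprinkler model.

    ```python
    import pprint

    # Each component maps an abnormal output state to its possible local causes:
    # either an internal failure mode (a basic event) or an abnormal input state.
    function_tables = {
        "sprinkler_head": {"no_water_out": ["head_blocked", ("pipe", "no_flow")]},
        "pipe":           {"no_flow":      ["pipe_rupture", ("pump", "no_pressure")]},
        "pump":           {"no_pressure":  ["pump_fails_to_start", ("power", "no_supply")]},
        "power":          {"no_supply":    ["grid_loss"]},
    }

    def trace_back(component, state):
        """Recursively expand an abnormal state into an OR-gate over its causes."""
        causes = []
        for cause in function_tables[component][state]:
            if isinstance(cause, tuple):        # abnormal input: follow the flow upstream
                causes.append(trace_back(*cause))
            else:                               # basic event: leaf of the fault tree
                causes.append(cause)
        return {"event": f"{component}:{state}", "OR": causes}

    pprint.pprint(trace_back("sprinkler_head", "no_water_out"))
    ```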

  7. Research on development model of nuclear component based on life cycle management

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    At present the development process of a nuclear component, and even the nuclear component itself, is increasingly supported by computer technology. This growing use of computers and software has accelerated the development of nuclear technology on the one hand, and brought new problems on the other. In particular, the combination of hardware, software and humans has raised nuclear component system complexity to an unprecedented level. To address this problem, Life Cycle Management technology is adopted for the nuclear component system, and an intensive discussion of the development process of a nuclear component is presented. Taking into account the characteristics of nuclear component development, such as the complexity and strict safety requirements of nuclear components, long design periods, changeable design specifications and requirements, high capital investment, and the need to satisfy engineering codes/standards, a development life-cycle model for nuclear components is presented. The development life-cycle model is organized at three levels, namely the component-level development life-cycle, the sub-component development life-cycle and the component-level verification/certification life-cycle. The purposes and outcomes of the development processes are stated in detail. A process framework for nuclear components based on systems engineering, and a development environment for nuclear components, are discussed as future research work. (authors)

  8. Repeat, Low Altitude Measurements of Vegetation Status and Biomass Using Manned Aerial and UAS Imagery in a Piñon-Juniper Woodland

    Science.gov (United States)

    Krofcheck, D. J.; Lippitt, C.; Loerch, A.; Litvak, M. E.

    2015-12-01

    Measuring the above-ground biomass of vegetation is a critical component of any ecological monitoring campaign. Traditionally, vegetation biomass has been measured with allometric-based approaches. However, this is time-consuming, labor-intensive, and extremely expensive to conduct over large scales, and consequently is cost-prohibitive at the landscape scale. Furthermore, in semi-arid ecosystems characterized by vegetation with inconsistent growth morphologies (e.g., piñon-juniper woodlands), even ground-based conventional allometric approaches are often challenging to execute consistently across individuals and through time, increasing the difficulty of the required measurements and consequently reducing the accuracy of the resulting products. To constrain the uncertainty associated with these campaigns, and to expand the extent of our measurement capability, we made repeat measurements of vegetation biomass in a semi-arid piñon-juniper woodland using structure-from-motion (SfM) techniques. We used high-spatial-resolution overlapping aerial images and high-accuracy ground control points, collected from both manned aircraft and multi-rotor UAS platforms, to generate a digital surface model (DSM) for our experimental region. We extracted high-precision canopy volumes from the DSM and compared these to the vegetation allometric data to generate high-precision canopy volume models. We used these models to predict the drivers of the allometric equations for Pinus edulis and Juniperus monosperma (canopy height, diameter at breast height, and root collar diameter). Using this approach, we successfully accounted for the carbon stocks in standing live and standing dead vegetation across a 9 ha region, which contained 12.6 Mg/ha of standing dead biomass, with good agreement with our field plots. Here we present the initial results from an object-oriented workflow which aims to automate the biomass estimation process of tree crown delineation and volume calculation, and partition
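
    A minimal sketch of the regression step implied above, relating SfM-derived canopy volume to field-measured biomass with a log-log allometric fit so that biomass can then be predicted from the DSM alone; the calibration numbers are synthetic placeholders, not the study's plot data.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic calibration trees: SfM canopy volume (m^3) and field-measured biomass (kg)
    volume = rng.uniform(0.5, 30.0, 40)
    biomass = 2.1 * volume**0.9 * np.exp(rng.normal(0.0, 0.15, volume.size))

    # Log-log allometric fit: ln(biomass) = b0 + b1 * ln(volume)
    b1, b0 = np.polyfit(np.log(volume), np.log(biomass), 1)

    def predict_biomass(canopy_volume):
        """Predict per-tree biomass (kg) from an SfM canopy volume (m^3)."""
        return np.exp(b0) * canopy_volume**b1

    print(round(b0, 3), round(b1, 3), round(float(predict_biomass(10.0)), 1))
    ```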

  9. Component-oriented approach to the development and use of numerical models in high energy physics

    International Nuclear Information System (INIS)

    Amelin, N.S.; Komogorov, M.Eh.

    2002-01-01

    We discuss the main concepts of a component approach to the development and use of numerical models in high energy physics. This approach is realized as the NiMax software system. The concepts discussed are illustrated by numerous examples from user sessions with the system. In the appendix chapter we describe the physics and numerical algorithms of the model components used to perform simulation of hadronic and nuclear collisions at high energies. These components are members of hadronic application modules that have been developed with the help of the NiMax system. This report serves as an early release of the NiMax manual, mainly for model component users.

  10. Modeling media as latent semantics based on cognitive components

    DEFF Research Database (Denmark)

    Petersen, Michael Kai

    Though one might think of media as an audiovisual stream of consciousness, we frequently encode frames of video sequences and waves of sound into strings of text. Language allows us to both share the internal representations of what we perceive as mental concepts, as well as categorizing them...... of media based on lyrics, synopses, subtitles, blogs or web pages associated with the content. In the proposed model the bottom-up generated sensory input is a matrix of tens of thousands of words co-occurring within multiple contexts, that are in turn represented as vectors in a semantic space of reduced...... with aspects of cognitive linguistics that potentially could be utilized in applications ranging from information retrieval and media personalization, to emotional brand building or neuroscientific modeling of syntax and semantics....

  11. Quantifying functional connectivity in multi-subject fMRI data using component models

    DEFF Research Database (Denmark)

    Madsen, Kristoffer Hougaard; Churchill, Nathan William; Mørup, Morten

    2017-01-01

    in the brain among groups of subjects. Component models can be used to define subspace representations of functional connectivity that are more interpretable. It is, however, unclear which component model provides the optimal representation of functional networks for multi-subject fMRI datasets. A flexible......-generalizing models account for subject variability within a common spatial subspace. Within this set of models, spatial Independent Component Analysis (sICA) on concatenated data provides more interpretable brain patterns, whereas a consistent-covariance model that accounts for subject-specific network scaling...

  12. Modeling Photoionization of Aqueous DNA and Its Components

    Czech Academy of Sciences Publication Activity Database

    Pluhařová, Eva; Slavíček, P.; Jungwirth, Pavel

    2015-01-01

    Vol. 48, No. 5 (2015), pp. 1209-1217 ISSN 0001-4842 R&D Projects: GA ČR GBP208/12/G016 Grant - others: GA ČR(CZ) GA13-34168S Institutional support: RVO:61388963 Keywords: DNA * photoelectron spectroscopy * ab initio calculations * polarizable continuum solvent model Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 22.003, year: 2015

  13. Cosmology in one dimension: A two-component model

    International Nuclear Information System (INIS)

    Shiozawa, Yui; Miller, Bruce N.

    2016-01-01

    Highlights: • Constructed the first one-dimensional toy universe model with both conservative (dark) matter and dissipative (luminous) matter. • Simulated bottom-up structure formation with rich multifractal structures, a robust feature of 1-D models. • Demonstrated that mass-oriented multifractal analyses can be employed to investigate the evolution of a hierarchical clustering process. • Found a significant difference in the generalized fractal dimensions for each type of matter demonstrating distribution bias within clusters. - Abstract: We investigate structure formation in a one dimensional model of a matter-dominated universe using a quasi-Newtonian formulation. In addition to dissipation-free dark matter, dissipative luminous matter is introduced to examine the potential bias in the distributions. We use multifractal analysis techniques to identify scale-dependent structures, including clusters and voids. Both dark matter and luminous matter exhibit multifractal geometry over a finite range as the universe evolves in time. We present the results for the generalized dimensions computed on various scales for each matter distribution which clearly supports the bottom-up structure formation scenario. We compare and contrast the multifractal dimensions of two types of matter for the first time and show how dynamical considerations cause them to differ.
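
    The generalized (Rényi) dimensions used in such analyses can be estimated by box counting over the particle positions. The sketch below does this for a synthetic clustered 1-D point set; the measure definition, scale range and test data are illustrative assumptions rather than the simulation output of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def generalized_dimensions(x, qs, n_scales=8):
        """Estimate D_q by box counting a set of 1-D positions rescaled to [0, 1]."""
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)
        dims = {}
        for q in qs:
            logs_eps, logs_sum = [], []
            for k in range(1, n_scales + 1):
                n_boxes = 2 ** k
                counts, _ = np.histogram(x, bins=n_boxes, range=(0.0, 1.0))
                p = counts[counts > 0] / counts.sum()          # box measures
                logs_eps.append(np.log(1.0 / n_boxes))
                if abs(q - 1.0) < 1e-9:                        # information dimension limit
                    logs_sum.append(np.sum(p * np.log(p)))
                else:
                    logs_sum.append(np.log(np.sum(p ** q)) / (q - 1.0))
            dims[q] = np.polyfit(logs_eps, logs_sum, 1)[0]     # slope vs log(scale)
        return dims

    # Synthetic clustered point set standing in for a matter distribution
    x = np.concatenate([rng.normal(c, 0.01, 300) for c in rng.random(10)])
    print(generalized_dimensions(x, qs=[0, 1, 2]))
    ```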

  14. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Carl Stern; Martin Lee

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models

  15. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

    Jan 5, 2016 ... We propose a two-component dark matter (DM) model, each component of which is a real singlet scalar, to explain results from both direct and indirect detection experiments. We put the constraints on the model parameters from theoretical bounds, PLANCK relic density results and direct DM experiments.

  16. DEVELOPMENT OF CAPE-OPEN COMPLIANT PROCESS MODELING COMPONENTS IN MICROSOFT .NET

    Science.gov (United States)

    The CAPE-OPEN middleware standards were created to allow process modeling components (PMCs) developed by third parties to be used in any process modeling environment (PME) utilizing these standards. The CAPE-OPEN middleware specifications were based upon both Microsoft's Compone...

  17. Mathematical Model for Multicomponent Adsorption Equilibria Using Only Pure Component Data

    DEFF Research Database (Denmark)

    Marcussen, Lis

    2000-01-01

    A mathematical model for nonideal adsorption equilibria in multicomponent mixtures is developed. It is applied with good results for pure substances and for prediction of strongly nonideal multicomponent equilibria using only pure component data. The model accounts for adsorbent-adsorbate and ads...

  18. Three Fundamental Components of the Autopoiesic Leadership Model

    Directory of Open Access Journals (Sweden)

    Mateja Kalan

    2017-06-01

    Full Text Available Research Question (RQ): What type of leadership could be developed upon transformational leadership? Purpose: The purpose of the research was to create a new leadership style. Its variables can be further developed upon transformational leadership variables. Namely, this leadership style is known as a successful leadership style in successful organisations. Method: In the research of published papers from scientific databases, we relied on the triangulation of theories. To clarify the research question, we have researched different authors, who based their research papers on different hypotheses. In some articles, hypotheses were even contradictory. Results: Through the research, we have concluded that authors often changed certain variables when researching the topic of transformational leadership. We have correlated these variables and developed a new model, naming it autopoiesic leadership. Its main variables are (1) goal orientation, (2) emotional sensitivity, and (3) the manager’s flexibility in organisations. Organisation: Our research can have a positive effect on managers in terms of recognising the importance of the selected variables. Practical application of autopoiesic leadership can imply more efficiency in the business processes of a company, increasing its financial performance. Society: Autopoiesic leadership is a leadership style that largely influences the use of the individual’s internal resources. Thus, she or he becomes internally motivated, and this is the basis for quality work. This strengthens employees’ social aspect, which consequently also has a positive effect on their life outside the organisational system, i.e. their family and broader living environment. Originality: In the worldwide literature, we have noticed the concept of autopoiesis in papers about management subjects, but the autopoiesic leadership model has not been developed so far. Limitations / Future Research: We based our research on the triangulation of theories

  19. Recovery capital pathways: Modelling the components of recovery wellbeing.

    Science.gov (United States)

    Cano, Ivan; Best, David; Edwards, Michael; Lehman, John

    2017-12-01

    In recent years, there has been recognition that recovery is a journey that involves the growth of recovery capital. Thus, recovery capital has become a commonly used term in addiction treatment and research, yet its operationalization and measurement have been limited. Due to these limitations, there is little understanding of long-term recovery pathways and their clinical application. We used the data of 546 participants from eight different recovery residences spread across Florida, USA. We calculated internal consistency for recovery capital and wellbeing, then assessed their factor structure via confirmatory factor analysis. The relationships between time, recovery barriers and strengths, wellbeing and recovery capital, as well as the moderating effect of gender, were estimated using structural equation modelling. The proposed model obtained an acceptable fit (χ²(141, N = 546) = 533.642, p < …) … wellbeing. Gender differences were observed. We tested the pathways to recovery for residents in the recovery housing population. Our results have implications not only for retention as a predictor of sustained recovery and wellbeing but also for the importance of meaningful activities in promoting recovery capital and wellbeing. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Dynamic Modeling of Solar Dynamic Components and Systems

    Science.gov (United States)

    Hochstein, John I.; Korakianitis, T.

    1992-01-01

    The purpose of this grant was to support NASA in modeling efforts to predict the transient dynamic and thermodynamic response of the space station solar dynamic power generation system. In order to meet the initial schedule requirement of providing results in time to support installation of the system as part of the initial phase of space station, early efforts were executed with alacrity and often in parallel. Initially, methods to predict the transient response of a Rankine as well as a Brayton cycle were developed. Review of preliminary design concepts led NASA to select a regenerative gas-turbine cycle using a helium-xenon mixture as the working fluid and, from that point forward, the modeling effort focused exclusively on that system. Although initial project planning called for a three-year period of performance, revised NASA schedules moved system installation to later and later phases of station deployment. Eventually, NASA elected to halt development of the solar dynamic power generation system for space station and to reduce support for this project to two-thirds of the original level.

  1. The dynamic cusp at low altitudes: A case study utilizing Viking, DMSP-F7 and Sondrestrom incoherent scatter radar observations

    Science.gov (United States)

    Watermann, J.; De La Beaujardiere, O.; Lummerzheim, D.; Woch, J.; Newell, P. T.; Potemra, T. A.; Rich, F. J.; Shapshak, M.

    1994-01-01

    Coincident multi-instrument magnetospheric and ionospheric observations have made it possible to determine the position of the ionospheric footprint of the magnetospheric cusp and to monitor its evolution over time. The data used include charged particle and magnetic field measurements from the Earth-orbiting Viking and DMSP-F7 satellites, electric field measurements from Viking, interplanetary magnetic field and plasma data from IMP-8, and Sondrestrom incoherent scatter radar observations of the ionospheric plasma density, temperature, and convection. Viking detected cusp precipitation poleward of 75.5 deg invariant latitude. The ionospheric response to the observed electron precipitation was simulated using an auroral model. It predicts enhanced plasma density and elevated electron temperature in the upper E- and F- regions. Sondrestrom radar observations are in agreement with the predictions. The radar detected a cusp signature on each of five consecutive antenna elevation scans covering 1.2h local time. The cusp appeared to be about 2 deg invariant latitude wide, and its ionospheric footprint shifted equatorward by nearly 2 deg during this time, possibly influenced by an overall decrease in the interplanetary magnetic field (IMF) B(sub z) component. The radar plasma drift data and the Viking magnetic and electric field data suggest that the cusp was associated with a continuous, rather than a patchy, merging between the IMF and the geomagnetic field.

  2. Validation of a blood marker for plasma volume in endurance athletes during a live-high train-low altitude training camp.

    Science.gov (United States)

    Lobigs, Louisa M; Garvican-Lewis, Laura A; Vuong, Victor L; Tee, Nicolin; Gore, Christopher J; Peeling, Peter; Dawson, Brian; Schumacher, Yorck O

    2018-02-19

    Altitude is a confounding factor within the Athlete Biological Passport (ABP) due, in part, to the plasma volume (PV) response to hypoxia. Here, a newly developed PV blood test is applied to assess the possible efficacy of reducing the influence of PV on the volumetric ABP markers: haemoglobin concentration ([Hb]) and the OFF-score. Endurance athletes (n=34) completed a 21-night simulated live-high train-low (LHTL) protocol (14 h per day at 3000 m). Bloods were collected twice pre-altitude; at days 3, 8, and 15 at altitude; and 1, 7, 21, and 42 days post-altitude. A full blood count was performed on the whole blood sample. Serum was analysed for transferrin, albumin, calcium, creatinine, total protein, and low-density lipoprotein. The PV blood test (consisting of the serum markers, [Hb] and platelets) was applied to the ABP adaptive model and new reference predictions were calculated for [Hb] and the OFF-score, thereby reducing the PV variance component. The PV correction refined the ABP reference predictions. The number of atypical passport findings (ATPFs) for [Hb] was reduced from 7 (in 5 subjects) to 6 (in 3 subjects). The OFF-score ATPFs increased with the PV correction (from 9 to 13, 99% specificity); most likely the result of more specific reference limit predictions combined with the altitude-induced increase in red cell production. Importantly, all abnormal biomarker values were identified by a low confidence value. Although the multifaceted, individual physiological response to altitude confounded some results, the PV model appears capable of reducing the impact of PV fluctuations on [Hb]. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components, while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that a poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
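
    The Monte Carlo cross-check mentioned above can be sketched for a simple mission-time failure probability with Weibull component lifetimes. The block diagram (a 1-out-of-2 redundant pump pair in series with a heat exchanger, all treated as non-repairable for brevity) and every parameter value are illustrative assumptions, not the NCCW system data or the semi-Markov formulation itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def weibull_sample(shape, scale, size):
        """Weibull failure times with the usual shape/scale parameterisation."""
        return scale * rng.weibull(shape, size)

    def system_failure_probability(mission_time, n_trials=200_000):
        """1-out-of-2 redundant pumps in series with a single heat exchanger."""
        pump_a = weibull_sample(shape=1.8, scale=12_000.0, size=n_trials)  # hours
        pump_b = weibull_sample(shape=1.8, scale=12_000.0, size=n_trials)
        hx     = weibull_sample(shape=1.2, scale=50_000.0, size=n_trials)
        # Redundant pair fails only when both pumps have failed; series with the HX
        system_life = np.minimum(np.maximum(pump_a, pump_b), hx)
        return np.mean(system_life < mission_time)

    print(system_failure_probability(mission_time=8760.0))  # one year of operation
    ```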

  4. A two-component rain model for the prediction of attenuation statistics

    Science.gov (United States)

    Crane, R. K.

    1982-01-01

    A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
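
    To illustrate the general structure of a two-component exceedance prediction, the sketch below adds a cell term with an exponential-like tail to a debris term with a lognormal tail, treating the two event populations as rare and non-overlapping. The functional forms and every parameter are assumptions for illustration; they are not Crane's published model or coefficients.

    ```python
    import math

    def q_function(z):
        """Gaussian tail probability."""
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    def exceedance_probability(a_db, p_cell, a_cell, p_debris, ln_mu, ln_sigma):
        """Probability that path attenuation exceeds a_db (dB): a cell term with an
        exponential tail plus a debris term with a lognormal tail. Treating the two
        event populations as rare and non-overlapping, their probabilities add."""
        cell = p_cell * math.exp(-a_db / a_cell)
        debris = p_debris * q_function((math.log(a_db) - ln_mu) / ln_sigma)
        return cell + debris

    # Illustrative parameters only (not published model coefficients)
    for a in (1.0, 3.0, 10.0):
        print(a, exceedance_probability(a, p_cell=0.02, a_cell=4.0,
                                        p_debris=0.1, ln_mu=0.0, ln_sigma=1.0))
    ```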

  5. The Component Slope Linear Model for Calculating Intensive Partial Molar Properties: Application to Waste Glasses

    International Nuclear Information System (INIS)

    Reynolds, Jacob G.

    2013-01-01

    Partial molar properties are the changes occurring when the fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset by a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on this curve is the partial molar property for that constituent. Actually plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. The model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH)4-H2O system, a simplified representation of Hanford nuclear waste. The partial molar properties of H2O, NaOH, and NaAl(OH)4 are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperatures of the glass components.
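
    The slope interpretation can be made concrete numerically: vary one mole fraction while the remaining fractions change proportionally, and differentiate the mixture property along that path. The toy property function and composition below are invented for illustration; they are not the heat capacity data analysed in the paper.

    ```python
    import numpy as np

    def mixture_property(x):
        """Toy intensive mixture property (a heat-capacity stand-in) as a function of
        mole fractions x = [x1, x2, x3]; invented purely for illustration."""
        pure = np.array([75.0, 60.0, 90.0])
        return float(x @ pure + 15.0 * x[0] * x[1] - 8.0 * x[1] * x[2])

    def partial_molar_slope(prop, x, i, h=1e-5):
        """Numerical slope of the property vs x_i while the remaining mole fractions
        change proportionally so the composition still sums to one."""
        x = np.asarray(x, dtype=float)
        others = np.ones(len(x), dtype=bool); others[i] = False
        def path(xi):
            y = np.empty_like(x)
            y[i] = xi
            y[others] = x[others] * (1.0 - xi) / x[others].sum()
            return y
        return (prop(path(x[i] + h)) - prop(path(x[i] - h))) / (2.0 * h)

    x = np.array([0.5, 0.3, 0.2])
    print([round(partial_molar_slope(mixture_property, x, i), 3) for i in range(3)])
    ```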

  6. The Component Model of Infrastructure: A Practical Approach to Understanding Public Health Program Infrastructure

    Science.gov (United States)

    Snyder, Kimberly; Rieker, Patricia P.

    2014-01-01

    Functioning program infrastructure is necessary for achieving public health outcomes. It is what supports program capacity, implementation, and sustainability. The public health program infrastructure model presented in this article is grounded in data from a broader evaluation of 18 state tobacco control programs and previous work. The newly developed Component Model of Infrastructure (CMI) addresses the limitations of a previous model and contains 5 core components (multilevel leadership, managed resources, engaged data, responsive plans and planning, networked partnerships) and 3 supporting components (strategic understanding, operations, contextual influences). The CMI is a practical, implementation-focused model applicable across public health programs, enabling linkages to capacity, sustainability, and outcome measurement. PMID:24922125

  7. A Critical Synthesis of Scientific Research on Business Models and Business Model Components

    Directory of Open Access Journals (Sweden)

    Roxana CLODNIȚCHI

    2017-12-01

    Full Text Available The current volatile economic environment, globalization and ever-shorter technology cycles impact the way business is done today. Business modelling proves itself to be an instrument which may decisively impact the success or failure of a business. This is why both the business and academic communities critically address this issue. The aim of this article is to contribute to the development of a unifying research agenda by synthesising the most relevant scientific research and studies. The author reviewed and analysed the scientific theoretical framework on this subject from the past 15 years. The research result consists of a systematisation of past approaches to business modelling, stressing the components as they are defined by contemporary scholars. By doing this, the author aims at reconciling the fragmented and only partially overlapping definitions of the concept of “business model”.

  8. Discrete and continuous reliability models for systems with identically distributed correlated components

    International Nuclear Information System (INIS)

    Fiondella, Lance; Xing, Liudong

    2015-01-01

    Many engineers and researchers base their reliability models on the assumption that components of a system fail in a statistically independent manner. This assumption is often violated in practice because environmental and system specific factors contribute to correlated failures, which can lower the reliability of a fault tolerant system. A simple method to quantify the impact of correlation on system reliability is needed to encourage models explicitly incorporating correlated failures. Previous approaches to model correlation are limited to systems consisting of two or three components or assume that the majority of the subsets of component failures are statistically independent. This paper proposes a method to model the reliability of systems with correlated identical components, where components possess the same reliability and also exhibit a common failure correlation parameter. Both discrete and continuous models are proposed. The method is demonstrated through a series of examples, including derivations of analytical expressions for several common structures such as k-out-of-n: good and parallel systems. The continuous models consider the role of correlation on reliability and metrics, including mean time to failure, availability, and mean residual life. These examples illustrate that the method captures the impact of component correlation on system reliability and related metrics. - Highlights: • Reliability of systems with identical but correlated components are studied. • Correlation lowers the reliability and mean time to failure of fault tolerant systems. • Correlation lowers the availability and mean residual life of fault tolerant systems
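
    One standard way to realise identically distributed, correlated component states is an exchangeable mixture: draw a common success probability from a Beta distribution whose mean equals the component reliability and whose spread encodes the pairwise correlation, then let components behave independently given that draw. The sketch below applies this construction to a k-out-of-n:G system; it is an illustrative stand-in, not the specific model derived in the paper.

    ```python
    import math

    def beta_binomial_pmf(i, n, alpha, beta):
        """P(i working components out of n) when the common reliability is Beta-distributed."""
        log_b = lambda a, b: math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
        return math.comb(n, i) * math.exp(log_b(i + alpha, n - i + beta) - log_b(alpha, beta))

    def k_out_of_n_reliability(k, n, p, rho):
        """Reliability of a k-out-of-n:G system of identical components with reliability p
        and pairwise correlation rho, via an exchangeable Beta mixture (corr = 1/(alpha+beta+1))."""
        if rho <= 0.0:  # independent special case
            return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
        s = (1.0 - rho) / rho
        alpha, beta = p * s, (1.0 - p) * s
        return sum(beta_binomial_pmf(i, n, alpha, beta) for i in range(k, n + 1))

    for rho in (0.0, 0.1, 0.3):   # correlation lowers the reliability of the redundant system
        print(rho, round(k_out_of_n_reliability(k=2, n=3, p=0.95, rho=rho), 5))
    ```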

  9. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption according to which all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In previous models, there is a restricting assumption according to which all components of a subsystem must be homogeneous. • The presented model allows the subsystems’ components to be non-homogeneous where required. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
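
    The objective that such a redundancy allocation search evaluates can be written directly for non-homogeneous subsystems: each parallel subsystem survives unless every one of its (possibly different) components fails, and the subsystems are in series. A minimal sketch with invented component reliabilities and costs follows; a GA would search over designs like this subject to cost or weight constraints.

    ```python
    from math import prod

    # Candidate design: each subsystem holds a mix of component types (non-homogeneous),
    # given here as (reliability, cost) pairs; all values are illustrative only.
    design = [
        [(0.90, 3.0), (0.85, 2.0)],              # subsystem 1: two different pumps
        [(0.95, 5.0)],                           # subsystem 2: a single valve
        [(0.80, 1.5), (0.80, 1.5), (0.75, 1.0)]  # subsystem 3: mixed redundancy
    ]

    def system_reliability(design):
        """Series of parallel subsystems; a subsystem works if any of its components works."""
        return prod(1.0 - prod(1.0 - r for r, _ in subsystem) for subsystem in design)

    def system_cost(design):
        return sum(c for subsystem in design for _, c in subsystem)

    print(round(system_reliability(design), 5), system_cost(design))
    ```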

  10. Towards system-level modeling and characterization of components for intravenous therapy

    NARCIS (Netherlands)

    Alveringh, Dennis; Wiegerink, Remco J.; Lötters, Joost Conrad

    2014-01-01

    Problems occur regularly with intravenous therapy, especially with the flow behavior. A mechanical model can predict which components of intravenous therapy systems introduce non-ideal effects in the flow. This study concentrates on gaining quantitative information of each separate component for

  11. The Validity of the Three-Component Model of Organizational Commitment in a Chinese Context.

    Science.gov (United States)

    Cheng, Yuqiu; Stockdale, Margaret S.

    2003-01-01

    The construct validity of a three-component model of organizational commitment was tested with 226 Chinese employees. Affective and normative commitment significantly predicted job satisfaction; all three components predicted turnover intention. Compared with Canadian (n=603) and South Korean (n=227) samples, normative and affective commitment…

  13. Coordinated Cluster, ground-based instrumentation and low-altitude satellite observations of transient poleward-moving events in the ionosphere and in the tail lobe

    Directory of Open Access Journals (Sweden)

    M. Lockwood

    2001-09-01

    Full Text Available During the interval between 8:00–9:30 on 14 January 2001, the four Cluster spacecraft were moving from the central magnetospheric lobe, through the dusk sector mantle, on their way towards intersecting the magnetopause near 15:00 MLT and 15:00 UT. Throughout this interval, the EISCAT Svalbard Radar (ESR) at Longyearbyen observed a series of poleward-moving transient events of enhanced F-region plasma concentration ("polar cap patches"), with a repetition period of the order of 10 min. Allowing for the estimated solar wind propagation delay of 75 (± 5) min, the interplanetary magnetic field (IMF) had a southward component during most of the interval. The magnetic footprint of the Cluster spacecraft, mapped to the ionosphere using the Tsyganenko T96 model (with input conditions prevailing during this event), was to the east of the ESR beams. Around 09:05 UT, the DMSP-F12 satellite flew over the ESR and showed a sawtooth cusp ion dispersion signature that also extended into the electrons on the equatorward edge of the cusp, revealing a pulsed magnetopause reconnection. The consequent enhanced ionospheric flow events were imaged by the SuperDARN HF backscatter radars. The average convection patterns (derived using the AMIE technique on data from the magnetometers, the EISCAT and SuperDARN radars, and the DMSP satellites) show that the associated poleward-moving events also convected over the predicted footprint of the Cluster spacecraft. Cluster observed enhancements in the fluxes of both electrons and ions. These events were found to be essentially identical at all four spacecraft, indicating that they had a much larger spatial scale than the satellite separation of the order of 600 km. Some of the events show a correspondence between the lowest energy magnetosheath electrons detected by the PEACE instrument on Cluster (10–20 eV) and the topside ionospheric enhancements seen by the ESR (at 400–700 km). We suggest that a potential barrier at the

  14. NASTRAN Modeling of Flight Test Components for UH-60A Airloads Program Test Configuration

    Science.gov (United States)

    Idosor, Florentino R.; Seible, Frieder

    1993-01-01

    Based upon the recommendations of the UH-60A Airloads Program Review Committee, work towards a NASTRAN remodeling effort has been conducted. This effort modeled and added the necessary structural/mass components to the existing UH-60A baseline NASTRAN model to reflect the addition of flight test components currently in place on the UH-60A Airloads Program Test Configuration used in NASA-Ames Research Center's Modern Technology Rotor Airloads Program. These components include necessary flight hardware such as instrument booms, movable ballast cart, equipment mounting racks, etc. Recent modeling revisions have also been included in the analyses to reflect the inclusion of new and updated primary and secondary structural components (i.e., tail rotor shaft service cover, tail rotor pylon) and improvements to the existing finite element mesh (i.e., revisions of material property estimates). Mode frequency and shape results have shown that components such as the Trimmable Ballast System baseplate and its respective payload ballast have caused a significant frequency change in a limited number of modes while only small percent changes in mode frequency are brought about with the addition of the other MTRAP flight components. With the addition of the MTRAP flight components, update of the primary and secondary structural model, and imposition of the final MTRAP weight distribution, modal results are computed representative of the 'best' model presently available.

  15. Detailed finite element method modeling of evaporating multi-component droplets

    Energy Technology Data Exchange (ETDEWEB)

    Diddens, Christian, E-mail: C.Diddens@tue.nl

    2017-07-01

    The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties and thermal effects. Based on representative examples of water–glycerol and water–ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.

  16. Model-Checking of Component-Based Event-Driven Real-Time Embedded Software

    National Research Council Canada - National Science Library

    Gu, Zonghua; Shin, Kang G

    2005-01-01

    .... We discuss application of model-checking to verify system-level concurrency properties of component-based real-time embedded software based on CORBA Event Service, using Avionics Mission Computing...

  17. Conservative modelling of the moisture and heat transfer in building components under atmospheric excitation

    DEFF Research Database (Denmark)

    Janssen, Hans; Blocken, Bert; Carmeliet, Jan

    2007-01-01

    While the transfer equations for moisture and heat in building components are currently undergoing standardisation, atmospheric boundary conditions, conservative modelling and numerical efficiency are not addressed. In a first part, this paper adds a comprehensive description of those boundary...

  18. Model-Based Design Tools for Extending COTS Components To Extreme Environments, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation in this project is model-based design (MBD) tools for predicting the performance and useful life of commercial-off-the-shelf (COTS) components and...

  19. Multi-component fiber track modelling of diffusion-weighted magnetic resonance imaging data

    Directory of Open Access Journals (Sweden)

    Yasser M. Kadah

    2010-01-01

    Full Text Available In conventional diffusion tensor imaging (DTI) based on magnetic resonance data, each voxel is assumed to contain a single component having diffusion properties that can be fully represented by a single tensor. Even though this assumption can be valid in some cases, the general case involves the mixing of components, resulting in significant deviation from the single-tensor model. Hence, a strategy that allows the decomposition of data based on a mixture model has the potential of enhancing the diagnostic value of DTI. This project aims to work towards the development and experimental verification of a robust method for solving the problem of multi-component modelling of diffusion tensor imaging data. The new method demonstrates significant error reduction from the single-component model while maintaining practicality for clinical applications, obtaining more accurate fiber tracking results.
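
    The mixture idea can be written as a weighted sum of tensor compartments, S = S0 * sum_i f_i exp(-b g' D_i g). The sketch below simulates such a two-tensor signal and recovers the volume fractions by least squares; the tensors, b-value, gradient scheme and noise level are all assumed for illustration and do not reproduce the article's estimation method.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def tensor(evals, angle):
        """Diagonal diffusion tensor (mm^2/s) rotated by `angle` in the x-y plane."""
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return R @ np.diag(evals) @ R.T

    def signal(gradients, b, tensors, fractions, s0=1.0):
        """Multi-tensor signal: S = S0 * sum_i f_i exp(-b g^T D_i g)."""
        atten = np.array([[np.exp(-b * g @ D @ g) for D in tensors] for g in gradients])
        return s0 * atten @ fractions, atten

    # Two crossing fiber compartments at 0 and 75 degrees (illustrative values)
    D1 = tensor([1.7e-3, 0.3e-3, 0.3e-3], 0.0)
    D2 = tensor([1.7e-3, 0.3e-3, 0.3e-3], np.radians(75))
    gradients = rng.normal(size=(30, 3))
    gradients /= np.linalg.norm(gradients, axis=1, keepdims=True)
    true_fractions = np.array([0.6, 0.4])

    y, design = signal(gradients, b=1000.0, tensors=[D1, D2], fractions=true_fractions)
    y_noisy = y + rng.normal(0.0, 0.01, y.shape)

    # Recover compartment fractions by ordinary least squares (clip tiny negatives)
    est, *_ = np.linalg.lstsq(design, y_noisy, rcond=None)
    print(np.round(np.clip(est, 0.0, None), 3))
    ```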

  20. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

    A two-component dark matter model with real singlet scalars confronting the GeV γ-ray excess from the galactic centre and the Fermi bubble. Debasish Majumdar, Kamakshya Prasad Modak, Subhendu Rakshit. Special: Cosmology, Volume 86, Issue ...

  1. A proposed centralised distribution model for the South African automotive component industry

    Directory of Open Access Journals (Sweden)

    Micheline J. Naude

    2009-12-01

    Full Text Available Purpose: This article explores the possibility of developing a distribution model, similar to the model developed and implemented by the South African pharmaceutical industry, which could be implemented by automotive component manufacturers for supply to independent retailers. Problem Investigated: The South African automotive components distribution chain is extensive with a number of players of varying sizes, from the larger spares distribution groups to a number of independent retailers. Distributing to the smaller independent retailers is costly for the automotive component manufacturers. Methodology: This study is based on a preliminary study of an explorative nature. Interviews were conducted with a senior staff member from a leading automotive component manufacturer in KwaZulu Natal and nine participants at a senior management level at five of their main customers (aftermarket retailers). Findings: The findings from the empirical study suggest that the aftermarket component industry is mature with the role players well established. The distribution chain to the independent retailer is expensive in terms of transaction and distribution costs for the automotive component manufacturer. A proposed centralised distribution model for supply to independent retailers has been developed which should reduce distribution costs for the automotive component manufacturer in terms of (1) the lowest possible freight rate; (2) timely and controlled delivery; and (3) reduced congestion at the customer's receiving dock. Originality: This research is original in that it explores the possibility of implementing a centralised distribution model for independent retailers in the automotive component industry. Furthermore, there is a dearth of published research on the South African automotive component industry particularly addressing distribution issues. Conclusion: The distribution model as suggested is a practical one and should deliver added value to automotive

  2. Seismic assessment and performance of nonstructural components affected by structural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Jieun; Althoff, Eric; Sezen, Halil; Denning, Richard; Aldemir, Tunc [Ohio State University, Columbus (United States)

    2017-03-15

    Seismic probabilistic risk assessment (SPRA) requires a large number of simulations to evaluate the seismic vulnerability of structural and nonstructural components in nuclear power plants. The effect of structural modeling and analysis assumptions on dynamic analysis of 3D and simplified 2D stick models of auxiliary buildings and the attached nonstructural components is investigated. Dynamic characteristics and seismic performance of building models are also evaluated, as well as the computational accuracy of the models. The presented results provide a better understanding of the dynamic behavior and seismic performance of auxiliary buildings. The results also help to quantify the impact of uncertainties associated with modeling and analysis of simplified numerical models of structural and nonstructural components subjected to seismic shaking on the predicted seismic failure probabilities of these systems.

  3. A component-based approach to integrated modeling in the geosciences: The design of CSDMS

    Science.gov (United States)

    Peckham, Scott D.; Hutton, Eric W. H.; Norris, Boyana

    2013-04-01

    Development of scientific modeling software increasingly requires the coupling of multiple, independently developed models. Component-based software engineering enables the integration of plug-and-play components, but significant additional challenges must be addressed in any specific domain in order to produce a usable development and simulation environment that also encourages contributions and adoption by entire communities. In this paper we describe the challenges in creating a coupling environment for Earth-surface process modeling and the innovative approach that we have developed to address them within the Community Surface Dynamics Modeling System.

  4. The n-component cubic model and flows: subgraph break-collapse method

    International Nuclear Information System (INIS)

    Essam, J.W.; Magalhaes, A.C.N. de.

    1988-01-01

    We generalise to the n-component cubic model the subgraph break-collapse method which we previously developed for the Potts model. The relations used are based on expressions which we recently derived for the Z(λ) model in terms of mod-λ flows. Our recursive algorithm is similar, for n = 2, to the break-collapse method for the Z(4) model proposed by Mariz and coworkers. It allows the exact calculation for the partition function and correlation functions for n-component cubic clusters with n as a variable, without the need to examine all of the spin configurations. (author)

  5. Refinement and verification in component-based model-driven design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    developed, all models constructed in each phase are verifiable. This requires that the modelling notations are formally defined and related in order to have tool support developed for the integration of sophisticated checkers, generators and transformations. This paper summarises our research on the method...... of Refinement of Component and Object Systems (rCOS) and illustrates it with experiences from the work on the Common Component Modelling Example (CoCoME). This gives evidence that the formal techniques developed in rCOS can be integrated into a model-driven development process and shows where it may...

  6. Principal components and generalized linear modeling in the correlation between hospital admissions and air pollution

    Science.gov (United States)

    de Souza, Juliana Bottoni; Reisen, Valdério Anselmo; Santos, Jane Méri; Franco, Glaura Conceição

    2014-01-01

    OBJECTIVE To analyze the association between concentrations of air pollutants and admissions for respiratory causes in children. METHODS Ecological time series study. Daily figures for hospital admissions of children aged < 6, and daily concentrations of air pollutants (PM10, SO2, NO2, O3 and CO) were analyzed in the Região da Grande Vitória, ES, Southeastern Brazil, from January 2005 to December 2010. For statistical analysis, two techniques were combined: Poisson regression with generalized additive models and principal component analysis. These analysis techniques complemented each other and provided more significant estimates of relative risk. The models were adjusted for temporal trend, seasonality, day of the week, meteorological factors and autocorrelation. In the final adjustment of the model, it was necessary to include Autoregressive Moving Average (p, q) models for the residuals in order to eliminate the autocorrelation structures present in the components. RESULTS For every 10.49 μg/m3 increase (interquartile range) in levels of the pollutant PM10 there was a 3.0% increase in the relative risk estimated using the generalized additive model with principal components and a seasonal autoregressive term, while in the usual generalized additive model the estimate was 2.0%. CONCLUSIONS Compared to the usual generalized additive model, the proposed generalized additive model with principal component analysis showed, in general, better results in estimating relative risk and quality of fit. PMID:25119940
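
    A minimal sketch of the combination described above on synthetic data: extract principal components from the correlated pollutant series and use them as covariates in a Poisson regression. The data are simulated, and the lag structure, smooth seasonal terms and ARMA residual correction of the published model are omitted; only the PCA-plus-Poisson step is illustrated.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)

    # Synthetic correlated pollutant series (PM10, SO2, NO2, O3, CO) over 365 days
    n_days, n_poll = 365, 5
    latent = rng.normal(size=(n_days, 2))
    pollutants = latent @ rng.normal(size=(2, n_poll)) + rng.normal(0.0, 0.3, (n_days, n_poll))

    # Synthetic daily admission counts driven by the first latent factor
    counts = rng.poisson(np.exp(1.0 + 0.3 * latent[:, 0]))

    # Principal components of the standardised pollutant matrix
    Z = (pollutants - pollutants.mean(0)) / pollutants.std(0)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ vt.T[:, :2]                       # keep the first two components

    # Poisson regression of admissions on the component scores
    X = sm.add_constant(scores)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(fit.params)
    # Relative risk per unit increase in the first component score (sign of a PC is arbitrary)
    print("RR per unit PC1:", np.exp(fit.params[1]))
    ```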

  7. Principal components and generalized linear modeling in the correlation between hospital admissions and air pollution.

    Science.gov (United States)

    Souza, Juliana Bottoni de; Reisen, Valdério Anselmo; Santos, Jane Méri; Franco, Glaura Conceição

    2014-06-01

    OBJECTIVE To analyze the association between concentrations of air pollutants and admissions for respiratory causes in children. METHODS Ecological time series study. Daily figures for hospital admissions of children aged < 6, and daily concentrations of air pollutants (PM10, SO2, NO2, O3 and CO) were analyzed in the Região da Grande Vitória, ES, Southeastern Brazil, from January 2005 to December 2010. For statistical analysis, two techniques were combined: Poisson regression with generalized additive models and principal component analysis. These analysis techniques complemented each other and provided more significant estimates of relative risk. The models were adjusted for temporal trend, seasonality, day of the week, meteorological factors and autocorrelation. In the final adjustment of the model, it was necessary to include Autoregressive Moving Average (p, q) models in the residuals in order to eliminate the autocorrelation structures present in the components. RESULTS For every 10.49 μg/m3 increase (interquartile range) in levels of the pollutant PM10 there was a 3.0% increase in the relative risk estimated using the principal component-seasonal autoregressive generalized additive model, while in the usual generalized additive model the estimate was 2.0%. CONCLUSIONS Compared with the usual generalized additive model, the proposed generalized additive model with principal component analysis generally showed better results in estimating relative risk and quality of fit.

  8. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multiple systematic errors, which takes eccentricity, probe offset, the radius of the probe tip, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and of the component radius on the roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle. Its effectiveness is verified by an experiment with a standard cylindrical component measured on a roundness measuring machine. Compared with the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
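
    For context, a minimal baseline sketch of the classical limacon-type error separation that the proposed model extends: the measured radial profile is fitted by least squares to a constant radius plus first-harmonic (eccentricity) terms, and the residual is taken as the roundness profile. The probe-offset, tip-radius and tilt terms of the paper's model are not included; all data and values are synthetic.

```python
# Minimal baseline sketch: classic limacon model r(theta) ≈ R + a*cos(theta) + b*sin(theta),
# separating spindle eccentricity from the roundness profile by linear least squares.
# The paper's model adds probe-offset, probe-tip-radius and tilt terms on top of this.
import numpy as np

rng = np.random.default_rng(1)
theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)

# Synthetic measurement: nominal radius 37 mm, eccentricity, a 5-lobe form error, noise.
R, ex, ey = 37.0, 0.004, -0.002                      # mm
form_error = 0.0015 * np.cos(5 * theta)              # mm
r_meas = R + ex * np.cos(theta) + ey * np.sin(theta) + form_error + rng.normal(0, 2e-4, theta.size)

# Least-squares fit of the limacon parameters [R, ex, ey].
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
params, *_ = np.linalg.lstsq(A, r_meas, rcond=None)
residual = r_meas - A @ params                       # roundness profile after error separation

print("fitted R, ex, ey:", params)
print("peak-to-valley roundness estimate (mm):", residual.max() - residual.min())
```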

  9. Two component WIMP-FImP dark matter model with singlet fermion, scalar and pseudo scalar

    Energy Technology Data Exchange (ETDEWEB)

    Dutta Banik, Amit; Pandey, Madhurima; Majumdar, Debasish [Saha Institute of Nuclear Physics, HBNI, Astroparticle Physics and Cosmology Division, Kolkata (India); Biswas, Anirban [Harish Chandra Research Institute, Allahabad (India)

    2017-10-15

    We explore a two component dark matter model with a fermion and a scalar. In this scenario the Standard Model (SM) is extended by a fermion, a scalar and an additional pseudo scalar. The fermionic component is assumed to have a global U(1)_DM and interacts with the pseudo scalar via Yukawa interaction while a Z_2 symmetry is imposed on the other component - the scalar. These ensure the stability of both dark matter components. Although the Lagrangian of the present model is CP conserving, the CP symmetry breaks spontaneously when the pseudo scalar acquires a vacuum expectation value (VEV). The scalar component of the dark matter in the present model also develops a VEV on spontaneous breaking of the Z_2 symmetry. Thus the various interactions of the dark sector and the SM sector occur through the mixing of the SM like Higgs boson, the pseudo scalar Higgs like boson and the singlet scalar boson. We show that the observed gamma ray excess from the Galactic Centre as well as the 3.55 keV X-ray line from Perseus, Andromeda etc. can be simultaneously explained in the present two component dark matter model and the dark matter self interaction is found to be an order of magnitude smaller than the upper limit estimated from the observational results. (orig.)

  10. The Peculiarities of Identifying the Components of a Business Model of Restaurant Industry Enterprise

    Directory of Open Access Journals (Sweden)

    Grosul Victoria A.

    2017-06-01

    Full Text Available The article substantiates the need for elaborating an efficient business model, the implementation of which would enable restaurant industry enterprises to create sustainable competitive advantages and would contribute to successful development in the long term. The basic scientific approaches to defining business model components are identified. The main emphases and standard elements of an enterprise business model under each of the scientific approaches are defined. The basic components of a business model of a restaurant industry enterprise are identified, taking into account the pivotal interrelated management processes: production, sales, and the organization of consumption. The characteristics of each component of the business model of a restaurant industry enterprise are provided in accordance with the objectives of its activity in the context of efficient strategic decisions.

  11. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel

    2017-04-06

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
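
    A schematic statement of the construction described above, in notation chosen here for illustration: the deviation of a model component with flexibility parameter ξ from its base model (ξ = 0) is measured by a distance derived from the Kullback-Leibler divergence, and an exponential penalty is placed on that distance.

```latex
% Penalised-complexity prior for a flexibility parameter \xi whose base model is \xi = 0
% (schematic univariate form; notation chosen here for illustration).
\[
  d(\xi) = \sqrt{2\,\mathrm{KLD}\left( f(\cdot \mid \xi) \,\|\, f(\cdot \mid \xi = 0) \right)},
  \qquad
  \pi(\xi) = \lambda \, e^{-\lambda\, d(\xi)} \left| \frac{\partial d(\xi)}{\partial \xi} \right| .
\]
% The rate \lambda is fixed through a user-defined scaling statement of the form
% P(Q(\xi) > U) = \alpha, where Q(\xi) is an interpretable transformation of \xi.
```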

  12. Pilot study on the effects of a 2-week hiking vacation at moderate versus low altitude on plasma parameters of carbohydrate and lipid metabolism in patients with metabolic syndrome.

    Science.gov (United States)

    Gutwenger, Ivana; Hofer, Georg; Gutwenger, Anna K; Sandri, Marco; Wiedermann, Christian J

    2015-03-28

    Hypoxic and hypobaric conditions may augment the beneficial influence of training on cardiovascular risk factors. This pilot study aimed to explore for effects of a two-week hiking vacation at moderate versus low altitude on adipokines and parameters of carbohydrate and lipid metabolism in patients with metabolic syndrome. Fourteen subjects (mean age: 55.8 years, range: 39 - 69) with metabolic syndrome participated in a 2-week structured training program (3 hours of guided daily hiking 4 times a week, training intensity at 55-65% of individual maximal heart rate; total training time, 24 hours). Participants were divided for residence and training into two groups, one at moderate altitude (1,900 m; n = 8), and the other at low altitude (300 m; n = 6). Anthropometric, cardiovascular and metabolic parameters were measured before and after the training period. In study participants, training overall reduced circulating levels of total cholesterol (p = 0.024), low-density lipoprotein cholesterol (p = 0.025) and adiponectin (p training at moderate altitude (n = 8), lowering effects on circulating levels were significant not only for total cholesterol, low-density-lipoprotein cholesterol and adiponectin (all, p altitude group (n = 6), none of the lipid parameters was significantly changed (each p > 0.05). Hiking-induced relative changes of triglyceride levels were positively associated with reductions in leptin levels (p = 0.006). As compared to 300 m altitude, training at 1,900 m showed borderline significant differences in the pre-post mean reduction rates of triglyceride (p = 0.050) and leptin levels (p = 0.093). Preliminary data on patients with metabolic syndrome suggest that a 2-week hiking vacation at moderate altitude may be more beneficial for adipokines and parameters of lipid metabolism than training at low altitude. In order to draw firm conclusions regarding better corrections of dyslipidemia and metabolic syndrome by physical exercise under mild hypobaric

  13. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    Science.gov (United States)

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of hot-spot ignition, low-pressure slow burning and high-pressure fast reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term and its reaction rate are obtained through a "mixing rule" of the explosive components; new expressions for both the low-pressure slow burning term and the high-pressure fast reaction term are also obtained by establishing the relationships between the reaction rate of the multi-component PBX explosive and those of its explosive components, based on the corresponding terms of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the numerical pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. A Comparative Study of the Proposed Models for the Components of the National Health Information System

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-01-01

    Introduction: The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve the health, quality and effectiveness of health care. In other words, using the National Health Information System one can improve the quality of health data, information and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system – for better planning and management of the factors influencing performance – seems necessary, in this study different attitudes towards the components of this system are explored comparatively. Methods: This is a descriptive, comparative study. The study material includes printed and electronic documents containing components of the national health information system in three parts: input, process and output. In this context, searches using library resources and the internet were conducted, and data analysis was expressed using comparative tables and qualitative data. Results: The findings showed that there are three different perspectives presenting the components of a national health information system: the Lippeveld, Sauerborn and Bodart model from 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini’s 2009 model. In the input section (resources and structure), all three models require components of management and leadership, planning and design of programs, supply of staff, and software and hardware facilities and equipment. In addition, in the “process” section of the three models, we pointed out the actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. Conclusion: The results showed that all the

  15. A comparative study of the proposed models for the components of the national health information system.

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve the health, quality and effectiveness of health care. In other words, using the National Health Information System one can improve the quality of health data, information and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system - for better planning and management of the factors influencing performance - seems necessary, in this study different attitudes towards the components of this system are explored comparatively. This is a descriptive, comparative study. The study material includes printed and electronic documents containing components of the national health information system in three parts: input, process and output. In this context, searches using library resources and the internet were conducted, and data analysis was expressed using comparative tables and qualitative data. The findings showed that there are three different perspectives presenting the components of a national health information system: the Lippeveld, Sauerborn and Bodart model from 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini's 2009 model. In the input section (resources and structure), all three models require components of management and leadership, planning and design of programs, supply of staff, and software and hardware facilities and equipment. In addition, in the "process" section of the three models, we pointed out the actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. The results showed that all the three models have had a brief discussion about the

  16. FlightDynLib: An Object-Oriented Model Component Library for Constructing Multi-Disciplinary Aircraft Dynamics Models

    OpenAIRE

    Looye, G.; Hecker, S.; Kier, T.; Reschke, C.

    2005-01-01

    In this paper a model component library for developing multi-disciplinary aircraft flight dynamics models is presented, named FlightDynLib. This library is based on the object-oriented modelling language Modelica that has been designed for modelling of large scale multi-physics systems. The flight dynamics library allows for graphical construction of complex rigid as well as flexible aircraft dynamics models and is fully compatible with other available libraries for electronics, thermodynamics...

  17. Abstract behavior types : a foundation model for components and their composition

    NARCIS (Netherlands)

    F. Arbab (Farhad)

    2003-01-01

    The notion of Abstract Data Type (ADT) has served as a foundation model for structured and object oriented programming for some thirty years. The current trend in software engineering toward component based systems requires a foundation model as well. The most basic inherent property of

  18. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composites). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models, for predicting...... the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damages. Then, a parametric Finite Element (FE) model of the damaged component is developed based...... on the inputs acquired from the image processing algorithm. The analysis of the metallic structures is employing the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...

  19. Enhanced hepatic insulin signaling in the livers of high altitude native rats under basal conditions and in the livers of low altitude native rats under insulin stimulation: a mechanistic study.

    Science.gov (United States)

    Al Dera, Hussain; Eleawa, Samy M; Al-Hashem, Fahaid H; Mahzari, Moeber M; Hoja, Ibrahim; Al Khateeb, Mahmoud

    2017-07-01

    This study was designed to investigate the role of the liver in lowering fasting blood glucose levels (FBG) in rats native to high (HA) and low altitude (LA) areas. As compared with LA natives, besides improved insulin and glucose tolerance, HA native rats had lower FBG, at least in part mediated by inhibition of hepatic gluconeogenesis and activation of glycogen synthesis. This effect is mediated by enhanced hepatic insulin signaling, attributable to decreased phosphorylation of TSC and the consequent inhibition of mTOR function. It was independent of AMPK activation or HIF1α stabilization, and was most probably due to oxidative stress-induced REDD1 expression. However, under insulin stimulation, and in spite of the less activated mTOR function in HA native rats, LA native rats had higher glycogen content and reduced levels of gluconeogenic enzymes with a more enhanced insulin signaling, mainly due to higher levels of p-IRS1 (tyr612).

  20. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    Science.gov (United States)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper motor controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single discipline models into the synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has high accuracy and is clearly superior to a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  1. MODELING THERMAL DUST EMISSION WITH TWO COMPONENTS: APPLICATION TO THE PLANCK HIGH FREQUENCY INSTRUMENT MAPS

    International Nuclear Information System (INIS)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2015-01-01

    We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.'1 resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.'1 FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales

  2. MODELING THERMAL DUST EMISSION WITH TWO COMPONENTS: APPLICATION TO THE PLANCK HIGH FREQUENCY INSTRUMENT MAPS

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, Aaron M.; Finkbeiner, Douglas P., E-mail: ameisner@fas.harvard.edu, E-mail: dfinkbeiner@cfa.harvard.edu [Department of Physics, Harvard University, 17 Oxford Street, Cambridge, MA 02138 (United States)

    2015-01-10

    We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.'1 resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.'1 FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales.

  3. Modeling Thermal Dust Emission with Two Components: Application to the Planck High Frequency Instrument Maps

    Science.gov (United States)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2015-01-01

    We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.'1 resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.'1 FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales.
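
    A sketch of the spectral form referred to above, assuming the generic shape of a two-modified-blackbody fit: each component is an optically thin MBB with its own amplitude, emissivity index and temperature. The parameter values below are illustrative only and are not the fitted Planck/DIRBE values; the published model also ties the two components together through additional constraints that are not reproduced here.

```python
# Sketch of a two-component modified-blackbody (MBB) dust SED,
# I_nu ≈ tau1*(nu/nu0)^beta1*B_nu(T1) + tau2*(nu/nu0)^beta2*B_nu(T2),
# compared with a single MBB. Parameter values are illustrative, not the fitted ones.
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck(nu_hz, temp_k):
    """Planck function B_nu in SI units (W m^-2 Hz^-1 sr^-1)."""
    x = H * nu_hz / (K_B * temp_k)
    return 2.0 * H * nu_hz**3 / C**2 / np.expm1(x)

def modified_blackbody(nu_hz, tau0, beta, temp_k, nu0_hz=3.0e12):
    """Optically thin MBB with optical depth tau0 at reference frequency nu0."""
    return tau0 * (nu_hz / nu0_hz) ** beta * planck(nu_hz, temp_k)

nu = np.logspace(np.log10(100e9), np.log10(3000e9), 200)   # 100-3000 GHz

# Illustrative two-component mix: a cold and a warm dust population.
two_comp = modified_blackbody(nu, 8e-6, 1.7, 9.5) + modified_blackbody(nu, 2e-6, 2.7, 16.0)
single   = modified_blackbody(nu, 1e-5, 1.6, 19.0)

for f_ghz in (100, 143, 217, 857):
    i = np.argmin(np.abs(nu - f_ghz * 1e9))
    print(f"{f_ghz:4d} GHz  two-comp/single = {two_comp[i] / single[i]:.2f}")
```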

  4. A Methodology for Modeling the Flow of Military Personnel Across Air Force Active and Reserve Components

    Science.gov (United States)

    2016-01-01

    capability to estimate the historic impact of changes in economic conditions on the flows of labor into, between, and out of the Air Force active... or considered about the effect that those policies might have on personnel flows into and out of other components. The degree to which this is

  5. Review of modeling and control during transport airdrop process

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2016-12-01

    Full Text Available This article presents a review of modeling and control during the airdrop process of transport aircraft. According to the airdrop height, the technology can be classified into high and low altitude airdrop, and in this article the research is reviewed for the two scenarios. While high altitude airdrop mainly focuses on precise landing control of the cargo, low altitude airdrop focuses on controlling the transport aircraft dynamics to ensure flight safety. The history of high precision airdrop systems is introduced first, and then the modeling and control problem of ultra low altitude airdrop from transport aircraft is presented. Finally, the potential problems and future directions of low altitude airdrop are discussed.

  6. Facilitating Performance Optimization of RF PCB Designs by using Parametric Finite-Element Component Models

    DEFF Research Database (Denmark)

    Rohde, John; Toftegaard, Thomas Skjødeberg

    2012-01-01

    Novel parametric finite-element models are provided for discrete SMD capacitors and inductors in the frequency range 100 MHz to 4 GHz. The aim of the models is to facilitate performance optimization and analysis of RF PCB designs integrating these SMD components with layout geometries such as antennas and PCB traces. The models presented are benchmarked against real-life measurements and conventional circuit models. Furthermore, two example parallel-resonance circuits are designed based on interpolation of the results and validated by measurements in order to demonstrate the accuracy...

  7. SASSYS-1 balance-of-plant component models for an integrated plant response

    International Nuclear Information System (INIS)

    Ku, J.-Y.

    1989-01-01

    Models of power plant heat transfer components and rotating machinery have been added to the balance-of-plant model in the SASSYS-1 liquid metal reactor systems analysis code. This work is part of a continuing effort in plant network simulation based on the general mathematical models developed. The models described in this paper extend the scope of the balance-of-plant model to handle non-adiabatic conditions along flow paths. While the mass and momentum equations remain the same, the energy equation now contains a heat source term due to energy transfer across the flow boundary or to work done through a shaft. The heat source term is treated fully explicitly. In addition, the equation of state is rewritten in terms of the quality and separate parameters for each phase. The models are simple enough to run quickly, yet include sufficient detail of dominant plant component characteristics to provide accurate results. 5 refs., 16 figs., 2 tabs

  8. Experimental validation of models applicable to the ultrasonic inspection of nuclear components

    International Nuclear Information System (INIS)

    Newberry, B.P.; Margetan, F.J.; Thompson, R.B.

    1988-01-01

    Two models were evaluated for their accuracy in predicting the results of proposed ultrasonic inspection techniques. The first was the Gauss-Hermite beam model for simulating the evolution of ultrasonic field patterns as the beam propagates from the transducer into a component. The second was a crack scattering model designed to predict the ultrasonic inspection response of a branched crack as an idealization of an intergranular stress corrosion crack. Beam profiles and crack responses are plotted

  9. The Peculiarities of Identifying the Components of a Business Model of Restaurant Industry Enterprise

    OpenAIRE

    Grosul Victoria A.; Ivanova Tatyana P.

    2017-01-01

    The article substantiates the need for elaborating an efficient business model, implementation of which would enable enterprises of restaurant industry to create sustainable competitive advantages and would contribute to successful development in the long term. The basic scientific approaches to defining the business model components have been allocated. The main emphases and standard elements of a business model of enterprise in terms of each of the scientific approaches have been defined. T...

  10. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

    Full Text Available Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and the hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic industrial example, a gate control system, was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The results of the experiment demonstrated the effectiveness of our approach.

  11. Development of the interactive model between Component Cooling Water System and Containment Cooling System using GOTHIC

    International Nuclear Information System (INIS)

    Byun, Choong Sup; Song, Dong Soo; Jun, Hwang Yong

    2006-01-01

    From a design point of view, the component cooling water (CCW) system is not designed fully interactively with its heat loads. Heat loads are calculated from the CCW design flow and temperature conditions, which are determined with conservatism. The CCW heat exchanger is then sized using the total maximized heat loads from the above calculation. This approach does not give optimized performance results or the exact trends of the CCW system and its loads during transients. Therefore, a combined model for performance analysis of the containment and the component cooling water (CCW) system is developed using the GOTHIC software code. The model is verified using the design parameters of the component cooling water heat exchanger and the heat loads during the recirculation mode of a loss of coolant accident scenario. This model may be used for calculating the realistic containment response and CCW performance, and for increasing the ultimate heat sink temperature limits

  12. Developing interpretable models with optimized set reduction for identifying high risk software components

    Science.gov (United States)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  13. Muscle artifact suppression using independent-component analysis and state-space modeling.

    Science.gov (United States)

    Santillán-Guzmán, Alina; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas

    2012-01-01

    In this paper, we aim at suppressing the muscle artifacts present in electroencephalographic (EEG) signals with a technique based on a combination of Independent Component Analysis (ICA) and State-Space Modeling (SSM). The novel algorithm uses ICA to provide an initial model for SSM which is further optimized by the maximum-likelihood approach. This model is fitted to artifact-free data. Then it is applied to data with muscle artifacts. The state space is augmented by extracting additional components from the data prediction errors. The muscle artifacts are well separated in the additional components and, hence, a suppression of them can be performed. The proposed algorithm is demonstrated by application to a clinical epilepsy EEG data set.
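
    A minimal sketch of the ICA stage only, on synthetic multichannel data: the recording is unmixed, the component with the most high-frequency (muscle-like) power is zeroed, and the channels are re-mixed. FastICA stands in for the ICA step, and the state-space refinement by maximum likelihood described in the abstract is omitted; all signals and thresholds are illustrative.

```python
# Sketch of the ICA stage only: unmix multichannel data, zero the component that
# looks like high-frequency muscle activity, and re-mix. The state-space refinement
# described in the paper is omitted here.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
fs = 250
t = np.arange(0, 10, 1 / fs)                    # 250 Hz, 10 s

brain = np.sin(2 * np.pi * 10 * t)              # 10 Hz "alpha-like" rhythm
slow = np.sin(2 * np.pi * 1.5 * t)              # slow background activity
muscle = rng.normal(size=t.size) * (t > 5)      # broadband burst in the second half

mixing = rng.normal(size=(8, 3))                # 8 "channels", 3 sources
eeg = np.column_stack([brain, slow, muscle]) @ mixing.T

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(eeg)                # shape (samples, components)

# Crude artifact detector: the muscle component has the largest high-frequency power.
diff_power = np.var(np.diff(sources, axis=0), axis=0)
artifact = int(np.argmax(diff_power))

cleaned_sources = sources.copy()
cleaned_sources[:, artifact] = 0.0
eeg_clean = ica.inverse_transform(cleaned_sources)
print("suppressed component:", artifact,
      "residual variance ratio:", np.var(eeg_clean) / np.var(eeg))
```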

  14. Use of 137Cs as a tracer of low-altitude transport of re-suspended aerosol from the African continent

    International Nuclear Information System (INIS)

    Hernandez, F.; Karlsson, L.; Alonso-Perez, S.; Rodriguez, S.; Lopez-Perez, M.; Cuevas, E.; Hernandez-Armas, J.

    2008-01-01

    Northern Africa and the Arabian Peninsula are known sources of dust particles, which, under certain meteorological conditions, may be transported over long distances (e.g. Goudie and Middleton, 2001). The island of Tenerife is situated approximately 200 km off the coast of Morocco. It is often affected by atmospheric dust intrusions from the African continent produced by the suspension of aerosol particulate. These atmospheric events produce important increments in PM10 concentrations in the air above the island. In this study, we have analysed the time series of 137Cs and PM10 recorded in Tenerife during 2000-2006 at the marine boundary layer (MBL) to test the possible usefulness of 137Cs as a marker of dust intrusions of African origin. The analysis was supported with results obtained from the HYSPLIT (Hybrid Single-Particle Lagrangian Integrated Trajectory) 4.0 dispersion model and the DREAM (Dust Regional Atmospheric Model) model (author)

  15. Morning surge in blood pressure using a random-effects multiple-component cosinor model.

    Science.gov (United States)

    Madden, J M; Browne, L D; Li, X; Kearney, P M; Fitzgerald, A P

    2018-01-29

    Blood pressure (BP) fluctuates throughout the day. The pattern it follows represents one of the most important circadian rhythms in the human body. For example, morning BP surge has been suggested as a potential risk factor for cardiovascular events occurring in the morning, but the accurate quantification of this phenomenon remains a challenge. Here, we outline a novel method to quantify morning surge. We demonstrate how the most commonly used method to model 24-hour BP, the single cosinor approach, can be extended to a multiple-component cosinor random-effects model. We outline how this model can be used to obtain a measure of morning BP surge by obtaining derivatives of the model fit. The model is compared with a functional principal component analysis that determines the main components of variability in the data. Data from the Mitchelstown Study, a population-based study of Irish adults (n = 2047), were used where a subsample (1207) underwent 24-hour ambulatory blood pressure monitoring. We demonstrate that our 2-component model provided a significant improvement in fit compared with a single-component model and a similar fit to a more complex model captured by b-splines using functional principal component analysis. The estimate of the average maximum slope was 2.857 mmHg/30 min (bootstrap estimates; 95% CI: 2.855-2.858 mmHg/30 min). Simulation results allowed us to quantify the between-individual SD in maximum slopes, which was 1.02 mmHg/30 min. By obtaining derivatives we have demonstrated a novel approach to quantify morning BP surge and its variation between individuals. This is the first demonstration of a cosinor approach to obtaining a measure of morning surge. Copyright © 2018 John Wiley & Sons, Ltd.
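
    A simplified, fixed-effects sketch of the idea: a two-component (24 h and 12 h) cosinor is fitted by linear least squares to one synthetic ambulatory profile, and the morning surge is read off as the maximum derivative of the fitted curve over a morning window. The published model is a random-effects version estimated across subjects; all data and parameter values here are synthetic.

```python
# Sketch: two-component (24 h + 12 h) cosinor fitted by least squares to a single
# synthetic ABPM profile, with morning surge taken as the maximum derivative of the
# fitted curve between 04:00 and 12:00.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 24, 0.5)                                    # readings every 30 min

# Synthetic systolic BP with 24 h and 12 h rhythms plus noise.
bp = 120 + 12 * np.cos(2 * np.pi * (t - 15) / 24) + 4 * np.cos(2 * np.pi * (t - 9) / 12)
bp = bp + rng.normal(0, 3, t.size)

# Design matrix: mesor plus cos/sin pairs for the 24 h and 12 h components.
w1, w2 = 2 * np.pi / 24, 2 * np.pi / 12
X = np.column_stack([np.ones_like(t),
                     np.cos(w1 * t), np.sin(w1 * t),
                     np.cos(w2 * t), np.sin(w2 * t)])
beta, *_ = np.linalg.lstsq(X, bp, rcond=None)

# Analytic derivative of the fitted curve, evaluated on a fine morning grid (04:00-12:00).
tm = np.arange(4, 12, 1 / 60)
slope = (-beta[1] * w1 * np.sin(w1 * tm) + beta[2] * w1 * np.cos(w1 * tm)
         - beta[3] * w2 * np.sin(w2 * tm) + beta[4] * w2 * np.cos(w2 * tm))

print("max morning slope (mmHg per 30 min):", 0.5 * slope.max())
print("time of max slope (h):", tm[np.argmax(slope)])
```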

  16. COMPONENT SUPPLY MODEL FOR REPAIR ACTIVITIES NETWORK UNDER CONDITIONS OF PROBABILISTIC INDEFINITENESS.

    Directory of Open Access Journals (Sweden)

    Victor Yurievich Stroganov

    2017-02-01

    Full Text Available This article systematizes the major production functions of a repair activities network and lists the planning and control functions, which are described in the form of business processes (BP). A simulation model for analysing the delivery effectiveness of components under conditions of probabilistic uncertainty is proposed. It is shown that a significant portion of the total number of business processes is devoted to the management and planning of the movement of parts and components. The construction of experimental design techniques for the simulation model under non-stationary conditions is also considered.

  17. Domain Walls and Textured Vortices in a Two-Component Ginzburg-Landau Model

    DEFF Research Database (Denmark)

    Madsen, Søren Peder; Gaididei, Yu. B.; Christiansen, Peter Leth

    2005-01-01

    We look for domain wall and textured vortex solutions in a two-component Ginzburg-Landau model inspired by two-band superconductivity. The two-dimensional two-component model, with equal coherence lengths and no magnetic field, shows some interesting properties. In the absence of a Josephson type coupling between the two order parameters a "textured vortex" is found by analytical and numerical solution of the Ginzburg-Landau equations. With a Josephson type coupling between the two order parameters we find the system to split up in two domains separated by a domain wall, where the order parameter

  18. Towards a Complete Model for Software Component Deployment on Heterogeneous Platform

    Directory of Open Access Journals (Sweden)

    Švogor Ivan

    2014-12-01

    Full Text Available This report briefly describes ongoing research related to optimizing the allocation of software components to a heterogeneous computing platform (which includes CPU, GPU and FPGA). The research goal is also presented, along with current hot topics of the research area, related research teams, and finally the results and contribution of my research. The work involves mathematical modelling which results in a goal function, an optimization method which finds a suboptimal solution to the goal function, and a software modelling tool which enables graphical representation of the problem at hand and helps developers determine component placement in the system design phase.

  19. Optics Elements for Modeling Electrostatic Lenses and Accelerator Components: III. Electrostatic Deflectors

    International Nuclear Information System (INIS)

    Brown, T.A.; Gillespie, G.H.

    1999-01-01

    Ion-beam optics models for simulating electrostatic prisms (deflectors) of different geometries have been developed for the computer code TRACE 3-D. TRACE 3-D is an envelope (matrix) code, which includes a linear space charge model, that was originally developed to model bunched beams in magnetic transport systems and radiofrequency (RF) accelerators. Several new optical models for a number of electrostatic lenses and accelerator columns have been developed recently that allow the code to be used for modeling beamlines and accelerators with electrostatic components. The new models include a number of options for: (1) Einzel lenses, (2) accelerator columns, (3) electrostatic prisms, and (4) electrostatic quadrupoles. A prescription for setting up the initial beam appropriate to modeling 2-D (continuous) beams has also been developed. The models for electrostatic prisms are described in this paper. The electrostatic prism model options allow the modeling of cylindrical, spherical, and toroidal electrostatic deflectors. The application of these models in the development of ion-beam transport systems is illustrated through the modeling of a spherical electrostatic analyzer as a component of the new low energy beamline at CAMS

  20. Common and Critical Components Among Community Health Assessment and Community Health Improvement Planning Models.

    Science.gov (United States)

    Pennel, Cara L; Burdine, James N; Prochaska, John D; McLeroy, Kenneth R

    Community health assessment and community health improvement planning are continuous, systematic processes for assessing and addressing health needs in a community. Since there are different models to guide assessment and planning, as well as a variety of organizations and agencies that carry out these activities, there may be confusion in choosing among approaches. By examining the various components of the different assessment and planning models, we are able to identify areas for coordination, ways to maximize collaboration, and strategies to further improve community health. We identified 11 common assessment and planning components across 18 models and requirements, with a particular focus on health department, health system, and hospital models and requirements. These common components included preplanning; developing partnerships; developing vision and scope; collecting, analyzing, and interpreting data; identifying community assets; identifying priorities; developing and implementing an intervention plan; developing and implementing an evaluation plan; communicating and receiving feedback on the assessment findings and/or the plan; planning for sustainability; and celebrating success. Within several of these components, we discuss characteristics that are critical to improving community health. Practice implications include better understanding of different models and requirements by health departments, hospitals, and others involved in assessment and planning to improve cross-sector collaboration, collective impact, and community health. In addition, federal and state policy and accreditation requirements may be revised or implemented to better facilitate assessment and planning collaboration between health departments, hospitals, and others for the purpose of improving community health.

  1. Mixture Statistical Distribution Based Multiple Component Model for Target Detection in High Resolution SAR Imagery

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-11-01

    Full Text Available This paper proposes an innovative Mixture Statistical Distribution Based Multiple Component (MSDMC model for target detection in high spatial resolution Synthetic Aperture Radar (SAR images. Traditional detection algorithms usually ignore the spatial relationship among the target’s components. In the presented method, however, both the structural information and the statistical distribution are considered to better recognize the target. Firstly, the method based on compressed sensing reconstruction is used to recover the SAR image. Then, the multiple component model composed of a root filter and some corresponding part filters is applied to describe the structural information of the target. In the following step, mixture statistical distributions are utilised to discriminate the target from the background, and the Method of Logarithmic Cumulants (MoLC based Expectation Maximization (EM approach is adopted to estimate the parameters of the mixture statistical distribution model, which will be finally merged into the proposed MSDMC framework together with the multiple component model. In the experiment, the aeroplanes and the electrical power towers in TerraSAR-X SAR images are detected at three spatial resolutions. The results indicate that the presented MSDMC Model has potential for improving the detection performance compared with the state-of-the-art SAR target detection methods.

  2. A new model for predicting thermodynamic properties of ternary metallic solution from binary components

    International Nuclear Information System (INIS)

    Fang Zheng; Zhang Quanru

    2006-01-01

    A model has been derived to predict the thermodynamic properties of a ternary metallic system from those of its three binaries. In the model, the excess Gibbs free energies and the interaction parameter ω_123 for the three components of a ternary are expressed as a simple sum of those of the three sub-binaries, and the mole fractions of the components of the ternary are identical with those of the sub-binaries. This model is greatly simplified compared with the current symmetrical and asymmetrical models. It is able to overcome some shortcomings of the current models, such as the arrangement of the components in the Gibbs triangle, the conversion of mole fractions between the ternary and the corresponding binaries, and some processes necessary for optimizing the various parameters of these models. Two ternary systems, Mg-Cu-Ni and Cd-Bi-Pb, are recalculated to demonstrate the validity and precision of the present model. The calculated results for the Mg-Cu-Ni system are better than those in the literature. New parameters in the Margules equations expressing the excess Gibbs free energies of the three binary systems of the Cd-Bi-Pb ternary system are also given
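
    A schematic statement of the model's central assumption as described above, in notation chosen here for illustration: the ternary excess Gibbs energy is approximated by the sum of the three binary terms, each evaluated directly at the ternary mole fractions.

```latex
% Schematic statement of the model's central assumption (notation chosen here):
% the excess Gibbs energy of the ternary is the sum of the three binary terms,
% each evaluated at the ternary mole fractions x_1, x_2, x_3.
\[
  G^{E}_{123}(x_1, x_2, x_3) \;\approx\;
  G^{E}_{12}(x_1, x_2) + G^{E}_{13}(x_1, x_3) + G^{E}_{23}(x_2, x_3),
  \qquad x_1 + x_2 + x_3 = 1 .
\]
```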

  3. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    Energy Technology Data Exchange (ETDEWEB)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.

  4. Photonic Beamformer Model Based on Analog Fiber-Optic Links’ Components

    Science.gov (United States)

    Volkov, V. A.; Gordeev, D. A.; Ivanov, S. I.; Lavrov, A. P.; Saenko, I. I.

    2016-08-01

    A model of a photonic beamformer for a wideband microwave phased array antenna is investigated. The main features of the photonic beamformer model, based on the true-time-delay technique, DWDM technology and fiber chromatic dispersion, are briefly analyzed. The performance characteristics of the key components of the photonic beamformer for a phased array antenna in the receive mode are examined. A beamformer model composed of components available on the market for analog fiber-optic communication links is designed and preliminarily investigated. Experimental demonstration of the model's beamforming features includes measurement of the far-field patterns of a 5-element microwave linear array antenna in the 6-16 GHz frequency range for antenna pattern steering up to 40°. The results of the experimental testing show good agreement with the calculated estimates.

  5. Development of pure component property models for chemical product-process design and analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol Shivajirao

    statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of pure components taken from......Property prediction models based on the group-contribution+ (GC+) approach have been developed to provide reliable predictions of pure component properties together with uncertainties of predicted property values which is much needed information in performing chemical product and process design...... and analysis of sustainable chemical processes. For developing property models, a systematic methodology for property modeling and uncertainty analysis is employed. The methodology includes a parameter estimation step to determine parameters of the property model and an uncertainty analysis step to establish...

  6. Modelling Earth's surface topography: decomposition of the static and dynamic components

    DEFF Research Database (Denmark)

    Guerri, Mattia; Cammarano, Fabio; Tackley, Paul J.

    2016-01-01

    topography maps and perform instantaneous mantle flow modelling to calculate the dynamic topography. We explore the effects of proposed mantle 1-D viscosities and also test a 3D pressure- and temperature-dependent viscosity model. We find that the patterns of residual and dynamic topography are robust.......19). The correlation slightly improves when considering only the very long-wavelength components of the maps (average = ∼0.23). We therefore conclude that a robust determination of dynamic topography is not feasible since current uncertainties affecting crustal density, mantle density and mantle viscosity are still......Contrasting results on the magnitude of the dynamic component of topography motivate us to analyse the sources of uncertainties affecting long wavelength topography modelling. We obtain a range of mantle density structures from thermo-chemical interpretation of available seismic tomography models...

  7. Statistical intercomparison of global climate models: A common principal component approach with application to GCM data

    International Nuclear Information System (INIS)

    Sengupta, S.K.; Boyle, J.S.

    1993-05-01

    Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution

  8. Verification of the component accuracy prediction obtained by physical modelling and the elastic simulation of the die/component interaction

    DEFF Research Database (Denmark)

    Ravn, Bjarne Gottlieb; Andersen, Claus Bo; Wanheim, Tarras

    2001-01-01

    There are three demands on a component that must undergo a die-cavity elasticity analysis. The demands on the product are specified as: (i) to be able to measure the loading profile which results in elastic die-cavity deflections; (ii) to be able to compute the elastic deflections using FE; (iii...

  9. Specification and Generation of Environment for Model Checking of Software Components

    Czech Academy of Sciences Publication Activity Database

    Pařízek, P.; Plášil, František

    2007-01-01

    Vol. 176 (2007), pp. 143-154. ISSN 1571-0661. R&D Projects: GA AV ČR 1ET400300504. Institutional research plan: CEZ:AV0Z10300504. Keywords: software components * behavior protocols * model checking * automated generation of environment. Subject RIV: JC - Computer Hardware; Software

  10. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...

  11. A model for determining condition-based maintenance policies for deteriorating multi-component systems

    NARCIS (Netherlands)

    Hontelez, J.A.M.; Wijnmalen, D.J.D.

    1993-01-01

    We discuss a method to determine strategies for preventive maintenance of systems consisting of gradually deteriorating components. A model has been developed to compute not only the range of conditions inducing a repair action, but also inspection moments based on the last known condition value so

  12. A Bayesian analysis of the PPP puzzle using an unobserved components model

    NARCIS (Netherlands)

    R.H. Kleijn (Richard); H.K. van Dijk (Herman)

    2001-01-01

    The failure to describe the time series behaviour of most real exchange rates as temporary deviations from fixed long-term means may be due to time variation of the equilibria themselves, see Engel (2000). We implement this idea using an unobserved components model and decompose the

  13. Component simulation in problems of calculated model formation of automatic machine mechanisms

    OpenAIRE

    Telegin Igor; Kozlov Alexander; Zhirkov Alexander

    2017-01-01

    The paper deals with the application of the component simulation method to automating the formation of mechanical system models, with the further possibility of their CAD realization. The purpose of the investigations is to automate the formation of CAD models of high-speed mechanisms in automatic machines and to analyse the dynamic processes occurring in their units, taking into account their elasto-inertial properties, power dissipation, gap...

  14. Around power law for PageRank components in Buckley-Osthus model of web graph

    OpenAIRE

    Gasnikov, Alexander; Zhukovskii, Maxim; Kim, Sergey; Noskov, Fedor; Plaunov, Stepan; Smirnov, Daniil

    2017-01-01

    In this paper we investigate the power law for PageRank components in the Buckley-Osthus model of the web graph. We compare different numerical methods for PageRank calculation and, with the best method, carry out extensive numerical experiments. These experiments confirm the power-law hypothesis. Finally, we discuss a realistic web-ranking model based on the classical PageRank approach.

  15. Swelling in light water reactor internal components: Insights from computational modeling

    Energy Technology Data Exchange (ETDEWEB)

    Stoller, Roger E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Barashev, Alexander V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States); Golubov, Stanislav I. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-01

    A modern cluster dynamics model has been used to investigate the materials and irradiation parameters that control microstructural evolution under the relatively low-temperature exposure conditions that are representative of the operating environment for in-core light water reactor components. The focus is on components fabricated from austenitic stainless steel. The model accounts for the synergistic interaction between radiation-produced vacancies and the helium that is produced by nuclear transmutation reactions. Cavity nucleation rates are shown to be relatively high in this temperature regime (275 to 325°C), but are sensitive to assumptions about the fine scale microstructure produced under low-temperature irradiation. The cavity nucleation rates observed run counter to the expectation that void swelling would not occur under these conditions. This expectation was based on previous research on void swelling in austenitic steels in fast reactors. This misleading impression arose primarily from an absence of relevant data. The results of the computational modeling are generally consistent with recent data obtained by examining ex-service components. However, it has been shown that the sensitivity of the model's predictions of low-temperature swelling behavior to assumptions about the primary damage source term and specification of the mean-field sink strengths is somewhat greater than that observed at higher temperatures. Further assessment of the mathematical model is underway to meet the long-term objective of this research, which is to provide a predictive model of void swelling at relevant lifetime exposures to support extended reactor operations.

  16. Spreadsheet modeling of optimal maintenance schedule for components in wear-out phase

    International Nuclear Information System (INIS)

    Artana, K.B.; Ishida, K.

    2002-01-01

    This paper addresses a method for determining the optimum maintenance schedule for components in the wear-out phase. The interval between maintenance actions for the components is optimized by minimizing the total cost, which consists of maintenance cost, operational cost, downtime cost and penalty cost. A decision to replace a component must also be taken when a component cannot attain the minimum reliability and availability index requirement. Premium Solver Platform, a spreadsheet-modeling tool, is utilized to model the optimization problem. Constraints, the considerations that must be fulfilled, direct this process; a minimum and a maximum value are set on each constraint to define the working area of the optimization. The optimization investigates n equally spaced maintenance actions at an interval of Tr. The increase in operational and maintenance costs due to the deterioration of the components is taken into account. The paper also presents a case study and sensitivity analysis on a liquid ring primer of a ship's bilge system
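    The optimization itself is small enough to sketch outside a spreadsheet. The fragment below does a grid search over the maintenance interval Tr, minimizing a simplified total cost rate subject to a minimum-reliability constraint; the Weibull wear-out parameters, cost figures and reliability threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Illustrative wear-out component: Weibull with shape > 1 (assumed values).
beta, eta = 2.5, 8000.0                              # shape, scale [h]
c_maint, c_down, c_op_rate = 1200.0, 5000.0, 0.02    # assumed cost figures
r_min = 0.90                                         # minimum reliability between maintenance

def reliability(t):
    """Weibull survival function for the wear-out component."""
    return np.exp(-(t / eta) ** beta)

def total_cost_rate(tr):
    """Cost per operating hour when maintaining every tr hours (simplified)."""
    expected_failures = (tr / eta) ** beta           # Weibull cumulative hazard
    cost_per_cycle = (c_maint                        # planned maintenance
                      + c_down * expected_failures   # unplanned downtime
                      + c_op_rate * tr ** 1.2)       # rising operating cost
    return cost_per_cycle / tr

# Grid search over candidate intervals, keeping only feasible ones.
candidates = np.linspace(500.0, 8000.0, 300)
feasible = candidates[reliability(candidates) >= r_min]
best = feasible[np.argmin([total_cost_rate(t) for t in feasible])]
print(f"optimal interval Tr = {best:.0f} h, cost rate {total_cost_rate(best):.3f}")
```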

  17. Abrupt strategy change underlies gradual performance change: Bayesian hierarchical models of component and aggregate strategy use.

    Science.gov (United States)

    Wynton, Sarah K A; Anglim, Jeromy

    2017-10-01

    While researchers have often sought to understand the learning curve in terms of multiple component processes, few studies have measured and mathematically modeled these processes on a complex task. In particular, there remains a need to reconcile how abrupt changes in strategy use can co-occur with gradual changes in task completion time. Thus, the current study aimed to assess the degree to which strategy change was abrupt or gradual, and whether strategy aggregation could partially explain gradual performance change. It also aimed to show how Bayesian methods could be used to model the effect of practice on strategy use. To achieve these aims, 162 participants completed 15 blocks of practice on a complex computer-based task-the Wynton-Anglim booking (WAB) task. The task allowed for multiple component strategies (i.e., memory retrieval, information reduction, and insight) that could also be aggregated to a global measure of strategy use. Bayesian hierarchical models were used to compare abrupt and gradual functions of component and aggregate strategy use. Task completion time was well-modeled by a power function, and global strategy use explained substantial variance in performance. Change in component strategy use tended to be abrupt, whereas change in global strategy use was gradual and well-modeled by a power function. Thus, differential timing of component strategy shifts leads to gradual changes in overall strategy efficiency, and this provides one reason for why smooth learning curves can co-occur with abrupt changes in strategy use. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
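    The power function for task completion time referred to above can be illustrated with a simple fit. The sketch below uses simulated block times and ordinary nonlinear least squares in place of the paper's Bayesian hierarchical estimation, so the functional form is the only element taken from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(block, a, b, c):
    """Completion time as a power function of practice block."""
    return a * block ** (-b) + c

blocks = np.arange(1, 16)                              # 15 practice blocks
rng = np.random.default_rng(1)
times = power_law(blocks, 60.0, 0.7, 10.0) + rng.normal(0, 1.5, blocks.size)

params, _ = curve_fit(power_law, blocks, times, p0=[50.0, 0.5, 5.0])
a, b, c = params
print(f"fitted curve: T = {a:.1f} * block^(-{b:.2f}) + {c:.1f}")
```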

  18. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up of components known to be similar, data collected on one is also relevant for the management of others. This is typically the case of wind farms, which are made up of similar turbines. Optimal management of wind farms is an important task due to the high cost of turbines' operation and maintenance: in this context, we recently proposed a method for planning and learning at system-level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters as dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, processing observations at system-level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show potential of significant savings in management of wind farms.

  19. Comparison of low-altitude UAV photogrammetry with terrestrial laser scanning as data-source methods for terrain covered in low vegetation

    Science.gov (United States)

    Gruszczyński, Wojciech; Matwij, Wojciech; Ćwiąkała, Paweł

    2017-04-01

    This article juxtaposes results from an unmanned aerial vehicle (UAV) and a terrestrial laser scanning (TLS) survey conducted to determine land relief. The determination of terrain relief is a task that requires precision in order to, for example, map natural and anthropogenic uplifts and subsidences of the land surface. One of the problems encountered when using either method to determine relief is the impact of any vegetation covering the given site on the determination of the height of the site's surface. In the discussed case, the site was covered mostly in low vegetation (grass). In one part, it had been mowed, whereas in the other it was 30-40 cm high. An attempt was made to filter point clouds in such a way as to leave only those points that represented the land surface and to eliminate those whose height was substantially affected by the surrounding vegetation. The reference land surface was determined from dense measurements obtained by means of a tacheometer and a rod-mounted reflector. This method ensures that the impact of vegetation is minimized. A comparison of the obtained accuracy levels, costs and effort related to each method leads to the conclusion that it is more efficient to use UAV than to use TLS for dense land relief modeling.
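    The point-cloud filtering step can be sketched with a simple grid-based minimum-height filter: points well above the lowest return in their cell are treated as vegetation and dropped. The cell size, height tolerance and synthetic data below are assumptions for illustration; the authors' actual filtering procedure may differ.

```python
import numpy as np

def ground_filter(points, cell=0.5, tol=0.05):
    """Keep points close to the lowest point of each grid cell.

    points : (N, 3) array of x, y, z coordinates (UAV or TLS cloud).
    cell   : horizontal cell size in metres.
    tol    : height tolerance above the cell minimum, in metres.
    Points raised by vegetation tend to sit well above the cell minimum
    and are discarded; the returned points approximate the bare surface.
    """
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keys = ij[:, 0] * 1_000_003 + ij[:, 1]          # combined cell index
    keep = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        idx = np.nonzero(keys == key)[0]
        zmin = points[idx, 2].min()
        keep[idx] = points[idx, 2] <= zmin + tol
    return points[keep]

# Example: synthetic 10 m x 10 m plot with 30-40 cm grass on half the area.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(20000, 2))
grass = (xy[:, 0] > 5) * rng.uniform(0.0, 0.4, 20000)   # vegetation offset
z = 0.02 * xy[:, 0] + grass + rng.normal(0, 0.01, 20000)
cloud = np.column_stack([xy, z])
print(ground_filter(cloud).shape)
```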

  20. The Tripartite Model of Risk Perception (TRIRISK): Distinguishing Deliberative, Affective, and Experiential Components of Perceived Risk.

    Science.gov (United States)

    Ferrer, Rebecca A; Klein, William M P; Persoskie, Alexander; Avishai-Yitshak, Aya; Sheeran, Paschal

    2016-10-01

    Although risk perception is a key predictor in health behavior theories, current conceptions of risk comprise only one (deliberative) or two (deliberative vs. affective/experiential) dimensions. This research tested a tripartite model that distinguishes among deliberative, affective, and experiential components of risk perception. In two studies, and in relation to three common diseases (cancer, heart disease, diabetes), we used confirmatory factor analyses to examine the factor structure of the tripartite risk perception (TRIRISK) model and compared the fit of the TRIRISK model to dual-factor and single-factor models. In a third study, we assessed concurrent validity by examining the impact of cancer diagnosis on (a) levels of deliberative, affective, and experiential risk perception, and (b) the strength of relations among risk components, and tested predictive validity by assessing relations with behavioral intentions to prevent cancer. The tripartite factor structure was supported, producing better model fit across diseases (studies 1 and 2). Inter-correlations among the components were significantly smaller among participants who had been diagnosed with cancer, suggesting that affected populations make finer-grained distinctions among risk perceptions (study 3). Moreover, all three risk perception components predicted unique variance in intentions to engage in preventive behavior (study 3). The TRIRISK model offers both a novel conceptualization of health-related risk perceptions, and new measures that enhance predictive validity beyond that engendered by unidimensional and bidimensional models. The present findings have implications for the ways in which risk perceptions are targeted in health behavior change interventions, health communications, and decision aids.

  1. Computational models for residual creep life prediction of power plant components

    International Nuclear Information System (INIS)

    Grewal, G.S.; Singh, A.K.; Ramamoortry, M.

    2006-01-01

    All high temperature - high pressure power plant components are prone to irreversible visco-plastic deformation by the phenomenon of creep. The steady state creep response as well as the total creep life of a material is related to the operational component temperature through, respectively, the exponential and inverse exponential relationships. Minor increases in the component temperature can thus have serious consequences as far as the creep life and dimensional stability of a plant component are concerned. In high temperature steam tubing in power plants, one mechanism by which a significant temperature rise can occur is by the growth of a thermally insulating oxide film on its steam side surface. In the present paper, an elegantly simple and computationally efficient technique is presented for predicting the residual creep life of steel components subjected to continual steam side oxide film growth. Similarly, fabrication of high temperature power plant components involves extensive use of welding as the fabrication process of choice. Naturally, issues related to the creep life of weldments have to be seriously addressed for safe and continual operation of the welded plant component. Unfortunately, a typical weldment in an engineering structure is a zone of complex microstructural gradation comprising a number of distinct sub-zones with distinct meso-scale and micro-scale morphology of the phases and (even) chemistry, and its creep life prediction presents considerable challenges. The present paper presents a stochastic algorithm, which can be used for developing experimental creep-cavitation intensity versus residual life correlations for welded structures. Apart from estimates of the residual life in a mean field sense, the model can be used for predicting the reliability of the plant component in a rigorous probabilistic setting. (author)
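    The exponential and inverse-exponential temperature dependences mentioned above can be made explicit with the usual engineering forms. These are generic relations (a Norton/Arrhenius creep rate and a Monkman-Grant-type life estimate), given for orientation only, not the specific correlations used in the paper:

```latex
% Steady-state (secondary) creep rate: Norton/Arrhenius form
\dot{\varepsilon}_{ss} = A\,\sigma^{n}\exp\!\left(-\frac{Q_c}{RT}\right),
% Rupture life via a Monkman-Grant-type relation, giving the
% inverse-exponential dependence of creep life on temperature
t_r \;\approx\; \frac{C_{MG}}{\dot{\varepsilon}_{ss}}
    \;\propto\; \sigma^{-n}\exp\!\left(+\frac{Q_c}{RT}\right).
```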

  2. Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.

    Science.gov (United States)

    He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne

    2016-04-01

    Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator that can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more

  3. Pheno-Copter: A Low-Altitude, Autonomous Remote-Sensing Robotic Helicopter for High-Throughput Field-Based Phenotyping

    Directory of Open Access Journals (Sweden)

    Scott C. Chapman

    2014-06-01

    Plant breeding trials are extensive (100s to 1000s of plots) and are difficult and expensive to monitor by conventional means, especially where measurements are time-sensitive. For example, in a land-based measure of canopy temperature (hand-held infrared thermometer) at two to 10 plots per minute, the atmospheric conditions may change greatly during the time of measurement. Such sensors measure small spot samples (2 to 50 cm2), whereas image-based methods allow the sampling of entire plots (2 to 30 m2). A higher aerial position allows the rapid measurement of large numbers of plots if the altitude is low (10 to 40 m) and the flight control is sufficiently precise to collect high-resolution images. This paper outlines the implementation of a customized robotic helicopter (gas-powered, 1.78-m rotor diameter) with autonomous flight control and software to plan flights over experiments that were 0.5 to 3 ha in area and, then, to extract, straighten and characterize multiple experimental field plots from images taken by three cameras. With a capacity to carry 1.5 kg for 30 min or 1.1 kg for 60 min, the system successfully completed >150 flights for a total duration of 40 h. Example applications presented here are estimations of the variation in: ground cover in sorghum (early season); canopy temperature in sugarcane (mid-season); and three-dimensional measures of crop lodging in wheat (late season). Together with this hardware platform, improved software to automate the production of ortho-mosaics and digital elevation models and to extract plot data would further benefit the development of high-throughput field-based phenotyping systems.

  4. Evaluation of the Component Chemical Potentials in Analytical Models for Ordered Alloy Phases

    Directory of Open Access Journals (Sweden)

    W. A. Oates

    2011-01-01

    The component chemical potentials in models of solution phases with a fixed number of sites can be evaluated easily when the Helmholtz energy is known as an analytical function of composition. In the case of ordered phases, however, the situation is less straightforward, because the Helmholtz energy is a functional involving internal order parameters. Because of this, the chemical potentials are usually obtained numerically from the calculated integral Helmholtz energy. In this paper, we show how the component chemical potentials can be obtained analytically in ordered phases via the use of virtual cluster chemical potentials. Some examples are given which illustrate the simplicity of the method.
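    For reference, the quantity being evaluated is the standard partial derivative of the Helmholtz energy; for a binary phase on a lattice with a fixed number of sites this reduces to the familiar tangent-intercept construction. This is a sketch of the definition only, not of the virtual-cluster formalism itself:

```latex
% Definition (per-species derivative of the Helmholtz energy)
\mu_i \;=\; \left(\frac{\partial F}{\partial N_i}\right)_{T,V,N_{j\neq i}}
% Tangent-intercept form for a binary A-B phase with molar Helmholtz
% energy F_m(x_B) on a lattice with a fixed number of sites
\mu_A \;=\; F_m - x_B\,\frac{\mathrm{d}F_m}{\mathrm{d}x_B},
\qquad
\mu_B \;=\; F_m + (1-x_B)\,\frac{\mathrm{d}F_m}{\mathrm{d}x_B}.
```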

  5. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    Science.gov (United States)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
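    A generic quasi-one-dimensional, inviscid, unsteady system of the kind described, with source terms standing in for bleed, bypass and distributed component effects, can be written as below. This is a textbook form given for orientation, not LAPIN's exact equation set:

```latex
\frac{\partial(\rho A)}{\partial t} + \frac{\partial(\rho u A)}{\partial x} = S_{\mathrm{mass}},
\qquad
\frac{\partial(\rho u A)}{\partial t}
  + \frac{\partial\big[(\rho u^{2} + p)A\big]}{\partial x}
  = p\,\frac{\partial A}{\partial x} + S_{\mathrm{mom}},
\qquad
\frac{\partial(\rho E A)}{\partial t}
  + \frac{\partial\big[(\rho E + p)u A\big]}{\partial x} = S_{\mathrm{energy}},
```

    where A(x, t) is the duct cross-sectional area, E the total specific energy, and the S terms model mass bleed or bypass and the work and heat addition of embedded components such as compressor, combustor and turbine stages distributed over several grid points.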

  6. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  7. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  8. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach

    Science.gov (United States)

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research. PMID:27706185

  9. Finsler Geometry Modeling of Phase Separation in Multi-Component Membranes

    Directory of Open Access Journals (Sweden)

    Satoshi Usui

    2016-08-01

    A Finsler geometric surface model is studied as a coarse-grained model for membranes of three components, such as zwitterionic phospholipid (DOPC), lipid (DPPC) and an organic molecule (cholesterol). To understand the phase separation of liquid-ordered (DPPC-rich, Lo) and liquid-disordered (DOPC-rich, Ld) phases, we introduce a binary variable σ (= ±1) into the triangulated surface model. We numerically determine that two circular and stripe domains appear on the surface. The dependence of the morphological change on the area fraction of Lo is consistent with existing experimental results. This provides us with a clear understanding of the origin of the line tension energy, which has been used to understand these morphological changes in three-component membranes. In addition to these two circular and stripe domains, a raft-like domain and budding domain are also observed, and the several corresponding phase diagrams are obtained.

  10. Modeling and numerical simulation of multi-component flow in porous media

    International Nuclear Information System (INIS)

    Saad, B.

    2011-01-01

    This work deals with the modeling and numerical simulation of two-phase multi-component flow in porous media. The study is divided into two parts. First, we study and prove the mathematical existence, in a weak sense, of two degenerate parabolic systems modeling two-phase (liquid and gas), two-component (water and hydrogen) flow in porous media. In the first model, we assume that there is a local thermodynamic equilibrium between both phases of hydrogen, using Henry's law. The second model is a relaxation of the first: the kinetics of the mass exchange between dissolved hydrogen and hydrogen in the gas phase is no longer instantaneous. The second part is devoted to the numerical analysis of these models. First, we propose a numerical scheme to compare numerical solutions obtained with the first model against those obtained with the second model as the characteristic time to recover thermodynamic equilibrium goes to zero. Second, we present a finite volume scheme with a phase-by-phase upstream weighting scheme without simplifying assumptions on the state law of gas densities. We also validate this scheme on 2D test cases. (author)
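    The local-equilibrium assumption of the first model can be stated compactly. One common form of Henry's law (conventions vary, and the exact constants used in the thesis are not reproduced here) is:

```latex
% Mole-fraction convention
x^{l}_{\mathrm{H}_2} \;=\; \frac{p^{g}_{\mathrm{H}_2}}{H(T)},
\qquad\text{or, in concentration form,}\qquad
c^{l}_{\mathrm{H}_2} \;=\; K_H(T)\,p^{g}_{\mathrm{H}_2},
```

    where p is the hydrogen partial pressure in the gas phase, H(T) the Henry constant and x (or c) the dissolved mole fraction (or concentration) in the liquid; the relaxed second model replaces this algebraic constraint with a finite-rate exchange term.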

  11. Steady-state plant model to predict hydrogen levels in power plant components

    Science.gov (United States)

    Glatzmaier, Greg C.; Cable, Robert; Newmarker, Marc

    2017-06-01

    The National Renewable Energy Laboratory (NREL) and Acciona Energy North America developed a full-plant steady-state computational model that estimates levels of hydrogen in parabolic trough power plant components. The model estimated dissolved hydrogen concentrations in the circulating heat transfer fluid (HTF), and corresponding partial pressures within each component. Additionally for collector field receivers, the model estimated hydrogen pressure in the receiver annuli. The model was developed to estimate long-term equilibrium hydrogen levels in power plant components, and to predict the benefit of hydrogen mitigation strategies for commercial power plants. Specifically, the model predicted reductions in hydrogen levels within the circulating HTF that result from purging hydrogen from the power plant expansion tanks at a specified target rate. Our model predicted hydrogen partial pressures from 8.3 mbar to 9.6 mbar in the power plant components when no mitigation treatment was employed at the expansion tanks. Hydrogen pressures in the receiver annuli were 8.3 to 8.4 mbar. When hydrogen partial pressure was reduced to 0.001 mbar in the expansion tanks, hydrogen pressures in the receiver annuli fell to a range of 0.001 mbar to 0.02 mbar. When hydrogen partial pressure was reduced to 0.3 mbar in the expansion tanks, hydrogen pressures in the receiver annuli fell to a range of 0.25 mbar to 0.28 mbar. Our results show that controlling hydrogen partial pressure in the expansion tanks allows us to reduce and maintain hydrogen pressures in the receiver annuli to any practical level.

  12. Steady-State Plant Model to Predict Hydrogen Levels in Power Plant Components

    Energy Technology Data Exchange (ETDEWEB)

    Glatzmaier, Greg C.; Cable, Robert; Newmarker, Marc

    2017-06-27

    The National Renewable Energy Laboratory (NREL) and Acciona Energy North America developed a full-plant steady-state computational model that estimates levels of hydrogen in parabolic trough power plant components. The model estimated dissolved hydrogen concentrations in the circulating heat transfer fluid (HTF), and corresponding partial pressures within each component. Additionally for collector field receivers, the model estimated hydrogen pressure in the receiver annuli. The model was developed to estimate long-term equilibrium hydrogen levels in power plant components, and to predict the benefit of hydrogen mitigation strategies for commercial power plants. Specifically, the model predicted reductions in hydrogen levels within the circulating HTF that result from purging hydrogen from the power plant expansion tanks at a specified target rate. Our model predicted hydrogen partial pressures from 8.3 mbar to 9.6 mbar in the power plant components when no mitigation treatment was employed at the expansion tanks. Hydrogen pressures in the receiver annuli were 8.3 to 8.4 mbar. When hydrogen partial pressure was reduced to 0.001 mbar in the expansion tanks, hydrogen pressures in the receiver annuli fell to a range of 0.001 mbar to 0.02 mbar. When hydrogen partial pressure was reduced to 0.3 mbar in the expansion tanks, hydrogen pressures in the receiver annuli fell to a range of 0.25 mbar to 0.28 mbar. Our results show that controlling hydrogen partial pressure in the expansion tanks allows us to reduce and maintain hydrogen pressures in the receiver annuli to any practical level.

  13. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  14. Mapping the Two-Component Atomic Fermi Gas to the Nuclear Shell-Model

    DEFF Research Database (Denmark)

    Özen, C.; Zinner, Nikolaj Thomas

    2014-01-01

    of the external potential becomes important. A system of two-species fermionic cold atoms with an attractive zero-range interaction is analogous to a simple model of the nucleus in which neutrons and protons interact only through a residual pairing interaction. In this article, we discuss how the problem of a two-component atomic Fermi gas in a tight external trap can be mapped to the nuclear shell model so that readily available many-body techniques in nuclear physics, such as the Shell Model Monte Carlo (SMMC) method, can be directly applied to the study of these systems. We demonstrate an application of the SMMC method...

  15. Mathematical models of seismics in composite media: elastic and poro-elastic components

    Directory of Open Access Journals (Sweden)

    Anvarbek Meirmanov

    2016-07-01

    In the present paper we consider elastic and poroelastic media having a common interface. We derive the macroscopic mathematical models for seismic wave propagation through these two different media as a homogenization of the exact mathematical model at the microscopic level. They consist of seismic equations for each component and boundary conditions at the common interface, which separates different media. To do this we use the two-scale expansion method in the corresponding integral identities, defining the weak solution. We illustrate our results with the numerical implementations of the inverse problem for the simplest model.

  16. Design of a logistics performance measurement model for the automotive component industry to strengthen competitiveness in dealing with AEC 2015

    Science.gov (United States)

    Amran, T. G.; Janitra Yose, Mindy

    2018-03-01

    As the free-trade ASEAN Economic Community (AEC) brings tougher competition, it is important that Indonesia's automotive industry be highly competitive as well. A logistics performance measurement model was designed as an evaluation tool for automotive component companies to improve their logistics performance in order to compete in the AEC. The design of the model was based on the Logistics Scorecard perspectives and divided into two stages: identifying the logistics business strategy to derive the KPIs, and arranging the model. 23 KPIs were obtained. The measurement results can be taken into consideration when determining policies to improve logistics performance and competitiveness.

  17. Femoral Component External Rotation Affects Knee Biomechanics: A Computational Model of Posterior-stabilized TKA.

    Science.gov (United States)

    Kia, Mohammad; Wright, Timothy M; Cross, Michael B; Mayman, David J; Pearle, Andrew D; Sculco, Peter K; Westrich, Geoffrey H; Imhauser, Carl W

    2018-01-01

    The correct amount of external rotation of the femoral component during TKA is controversial because the resulting changes in biomechanical knee function associated with varying degrees of femoral component rotation are not well understood. We addressed this question using a computational model, which allowed us to isolate the biomechanical impact of geometric factors including bony shapes, location of ligament insertions, and implant size across three different knees after posterior-stabilized (PS) TKA. Using a computational model of the tibiofemoral joint, we asked: (1) Does external rotation unload the medial collateral ligament (MCL) and what is the effect on lateral collateral ligament tension? (2) How does external rotation alter tibiofemoral contact loads and kinematics? (3) Does 3° external rotation relative to the posterior condylar axis align the component to the surgical transepicondylar axis (sTEA) and what anatomic factors of the femoral condyle explain variations in maximum MCL tension among knees? We incorporated a PS TKA into a previously developed computational knee model applied to three neutrally aligned, nonarthritic, male cadaveric knees. The computational knee model was previously shown to corroborate coupled motions and ligament loading patterns of the native knee through a range of flexion. Implant geometries were virtually installed using hip-to-ankle CT scans through measured resection and anterior referencing surgical techniques. Collateral ligament properties were standardized across each knee model by defining stiffness and slack lengths based on the healthy population. The femoral component was externally rotated from 0° to 9° relative to the posterior condylar axis in 3° increments. At each increment, the knee was flexed under 500 N compression from 0° to 90° simulating an intraoperative examination. The computational model predicted collateral ligament forces, compartmental contact forces, and tibiofemoral internal/external and

  18. Mechanical properties of multifunctional structure with viscoelastic components based on FVE model

    Science.gov (United States)

    Hao, Dong; Zhang, Lin; Yu, Jing; Mao, Daiyong

    2018-02-01

    Based on the models of Lion and Kardelky (2004) and Hofer and Lion (2009), a finite viscoelastic (FVE) constitutive model, considering the predeformation-, frequency- and amplitude-dependent properties, has been proposed in our earlier paper [1]. The FVE model is applied to investigating the dynamic characteristics of the multifunctional structure with viscoelastic components. Combining the FVE model with finite element theory, the dynamic model of the multifunctional structure can be obtained. Additionally, the parametric identification and the experimental verification are also given via frequency-sweep tests. The results show that the computational data agree well with the experimental data. The FVE model thus successfully expresses the dynamic characteristics of the viscoelastic materials utilized in the multifunctional structure. The multifunctional structure technology has been verified by in-orbit experiments.

  19. Astronomical component estimation (ACE v.1) by time-variant sinusoidal modeling

    Science.gov (United States)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-09-01

    Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on (fast) Fourier transformation. This technique has no unique solution separating variations in amplitude and frequency. This characteristic can make it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. This drawback is circumvented by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach was proven useful to characterize audio signals (music and speech), which are non-stationary in nature. Paleoclimate proxy signals and audio signals share similar dynamics; the only difference is the frequency relationship between the different components. A harmonic-frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, this difference is irrelevant for the problem of separating simultaneous changes in amplitude and frequency. Using an approach with overlapping analysis frames, the model (Astronomical Component Estimation, version 1: ACE v.1) captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency, with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretations, whereas the latter are estimated by means of linear least-squares. As output, the model provides the orbital component waveform, either in the depth or time domain. Uncertainty analyses of the model estimates are performed using Monte Carlo simulations. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. Frequency modulation patterns reconstruct changes in
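    Within a single analysis frame, the core estimation step can be sketched as a linear least-squares problem: a stationary sinusoid at the component's mean frequency is modulated by a polynomial whose coefficients are solved for directly. The code below illustrates that idea only; frame overlap, the Monte Carlo uncertainty analysis and the depth-to-time handling of ACE v.1 are omitted, and the function name is ours.

```python
import numpy as np

def fit_orbital_component(t, x, f0, degree=3):
    """Fit x(t) ~ a(t)*cos(2*pi*f0*t) + b(t)*sin(2*pi*f0*t) in one frame.

    t, x   : sample positions (depth or time) and detrended proxy values
    f0     : mean frequency of the targeted orbital component (assumed known)
    degree : order of the modulating polynomials a(t), b(t)
    Returns the component waveform, its instantaneous amplitude and a
    phase-derivative-based instantaneous frequency.
    """
    tc = t - t.mean()                                       # centre for conditioning
    powers = np.vander(tc, degree + 1, increasing=True)     # 1, t, t^2, ...
    carrier_c = np.cos(2 * np.pi * f0 * t)[:, None] * powers
    carrier_s = np.sin(2 * np.pi * f0 * t)[:, None] * powers
    G = np.hstack([carrier_c, carrier_s])                   # design matrix
    coef, *_ = np.linalg.lstsq(G, x, rcond=None)            # linear least squares
    a = powers @ coef[:degree + 1]                          # in-phase envelope
    b = powers @ coef[degree + 1:]                          # quadrature envelope
    component = G @ coef
    amplitude = np.hypot(a, b)                              # instantaneous amplitude
    phase = 2 * np.pi * f0 * t - np.unwrap(np.arctan2(b, a))
    inst_freq = np.gradient(phase, t) / (2 * np.pi)         # instantaneous frequency
    return component, amplitude, inst_freq
```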

  20. User's guide to the weather model: a component of the western spruce budworm modeling system.

    Science.gov (United States)

    W. P. Kemp; N. L. Crookston; P. W. Thomas

    1989-01-01

    A stochastic model useful in simulating daily maximum and minimum temperature and precipitation developed by Bruhn and others has been adapted for use in the western spruce budworm modeling system. This document describes how to use the weather model and illustrates some aspects of its behavior.
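    As an illustration of what such a stochastic daily weather generator looks like, the sketch below combines a two-state Markov chain for precipitation occurrence with a noisy seasonal temperature cycle. The structure and all parameter values are assumptions chosen for readability; this is not the calibrated Bruhn model distributed with the budworm system.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 365
p_wet_given_dry, p_wet_given_wet = 0.25, 0.60     # assumed transition probabilities

# Two-state Markov chain for daily precipitation occurrence.
wet = np.zeros(n_days, dtype=bool)
for d in range(1, n_days):
    p = p_wet_given_wet if wet[d - 1] else p_wet_given_dry
    wet[d] = rng.random() < p
precip = np.where(wet, rng.gamma(shape=0.8, scale=6.0, size=n_days), 0.0)   # mm

# Seasonal mean temperature cycle plus daily noise (assumed parameters, deg C).
doy = np.arange(n_days)
seasonal = 10.0 + 12.0 * np.sin(2 * np.pi * (doy - 100) / 365.0)
tmax = seasonal + 6.0 + rng.normal(0, 3.0, n_days) - 2.0 * wet              # cooler when wet
tmin = seasonal - 4.0 + rng.normal(0, 2.5, n_days)

print(f"wet days: {wet.sum()}, total precip: {precip.sum():.0f} mm")
```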

  1. Simplifying and upscaling water resources systems models that combine natural and engineered components

    Science.gov (United States)

    McIntyre, N.; Keir, G.

    2014-12-01

    Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.

  2. Blind Separation of Acoustic Signals Combining SIMO-Model-Based Independent Component Analysis and Binary Masking

    Directory of Open Access Journals (Sweden)

    Hiekata Takashi

    2006-01-01

    A new two-stage blind source separation (BSS) method for convolutive mixtures of speech is proposed, in which a single-input multiple-output (SIMO)-model-based independent component analysis (ICA) and a new SIMO-model-based binary masking are combined. SIMO-model-based ICA enables us to separate the mixed signals, not into monaural source signals but into SIMO-model-based signals from independent sources in their original form at the microphones. Thus, the separated signals of SIMO-model-based ICA can maintain the spatial qualities of each sound source. Owing to this attractive property, our novel SIMO-model-based binary masking can be applied to efficiently remove the residual interference components after SIMO-model-based ICA. The experimental results reveal that the separation performance can be considerably improved by the proposed method compared with that achieved by conventional BSS methods. In addition, the real-time implementation of the proposed BSS is illustrated.

  3. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced for reflecting characteristics of fault-tolerant techniques in the reliability model of digital protection systems in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from fault detection to fail-safe generation. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes of unavailability according to variation of diverse factors. - Abstract: With the improvement of digital technologies, a digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs), in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor in the reliability contribution of an FTT. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in the reliability model. To reflect characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from fault detection to fail-safe generation. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability according to a diversity of conditions. Sensitivity studies are performed to ascertain important variables which affect the integrated fault coverage and unavailability

  4. Resolution and Probabilistic Models of Components in CryoEM Maps of Mature P22 Bacteriophage

    Science.gov (United States)

    Pintilie, Grigore; Chen, Dong-Hua; Haase-Pettingell, Cameron A.; King, Jonathan A.; Chiu, Wah

    2016-01-01

    CryoEM continues to produce density maps of larger and more complex assemblies with multiple protein components of mixed symmetries. Resolution is not always uniform throughout a cryoEM map, and it can be useful to estimate the resolution in specific molecular components of a large assembly. In this study, we present procedures to 1) estimate the resolution in subcomponents by gold-standard Fourier shell correlation (FSC); 2) validate modeling procedures, particularly at medium resolutions, which can include loop modeling and flexible fitting; and 3) build probabilistic models that combine high-accuracy priors (such as crystallographic structures) with medium-resolution cryoEM densities. As an example, we apply these methods to new cryoEM maps of the mature bacteriophage P22, reconstructed without imposing icosahedral symmetry. Resolution estimates based on gold-standard FSC show the highest resolution in the coat region (7.6 Å), whereas other components are at slightly lower resolutions: portal (9.2 Å), hub (8.5 Å), tailspike (10.9 Å), and needle (10.5 Å). These differences are indicative of inherent structural heterogeneity and/or reconstruction accuracy in different subcomponents of the map. Probabilistic models for these subcomponents provide new insights, to our knowledge, and structural information when taking into account uncertainty given the limitations of the observed density. PMID:26743049
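    For reference, the FSC between two independently refined half-maps is computed per resolution shell with the standard expression below (the "gold standard" refers to the independent half-set refinement rather than to a different formula); the resolution of a subcomponent is read off where the masked curve crosses the chosen threshold, commonly 0.143:

```latex
\mathrm{FSC}(r) \;=\;
\frac{\operatorname{Re}\displaystyle\sum_{\mathbf{k}\in r} F_1(\mathbf{k})\,F_2^{*}(\mathbf{k})}
     {\sqrt{\displaystyle\sum_{\mathbf{k}\in r}\bigl|F_1(\mathbf{k})\bigr|^{2}
            \;\displaystyle\sum_{\mathbf{k}\in r}\bigl|F_2(\mathbf{k})\bigr|^{2}}},
```

    where F1 and F2 are the Fourier transforms of the two half-maps (optionally masked around the subcomponent of interest) and the sums run over the Fourier voxels k in the shell at spatial frequency r.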

  5. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
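    A minimal sketch of the analysis pipeline follows: configurations are encoded as (cos θ, sin θ) per site and fed to a plain PCA via the SVD. The random "configurations" below stand in for Monte Carlo samples of the frustrated XY model, so the physics is not reproduced here, only the machine-learning step.

```python
import numpy as np

def pca_of_spin_configs(thetas, n_comp=4):
    """Principal component analysis of XY spin configurations.

    thetas : (n_samples, n_sites) array of spin angles from Monte Carlo.
    Each configuration is encoded as (cos θ, sin θ) per site so that the
    periodicity of the angle does not distort the analysis.
    """
    features = np.hstack([np.cos(thetas), np.sin(thetas)])
    features -= features.mean(axis=0)                 # centre the data
    U, s, Vt = np.linalg.svd(features, full_matrices=False)
    explained = s**2 / np.sum(s**2)                   # variance ratios
    projections = features @ Vt[:n_comp].T            # leading components
    return explained[:n_comp], projections

# Stand-in data: 500 "configurations" of a 12x12 lattice in two regimes.
rng = np.random.default_rng(4)
ordered = rng.normal(0.0, 0.3, size=(250, 144))        # fluctuations about theta = 0
disordered = rng.uniform(-np.pi, np.pi, size=(250, 144))
ratios, pcs = pca_of_spin_configs(np.vstack([ordered, disordered]))
print(ratios)   # leading components separate the two synthetic "phases"
```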

  6. Sensitivity analysis of key components in large-scale hydroeconomic models

    Science.gov (United States)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods in key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods of ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate and its effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  7. The multi-component model of working memory: explorations in experimental cognitive psychology.

    Science.gov (United States)

    Repovs, G; Baddeley, A

    2006-04-28

    There are a number of ways one can hope to describe and explain cognitive abilities, each of them contributing a unique and valuable perspective. Cognitive psychology tries to develop and test functional accounts of cognitive systems that explain the capacities and properties of cognitive abilities as revealed by empirical data gathered by a range of behavioral experimental paradigms. Much of the research in the cognitive psychology of working memory has been strongly influenced by the multi-component model of working memory [Baddeley AD, Hitch GJ (1974) Working memory. In: Recent advances in learning and motivation, Vol. 8 (Bower GA, ed), pp 47-90. New York: Academic Press; Baddeley AD (1986) Working memory. Oxford, UK: Clarendon Press; Baddeley A. Working memory: Thought and action. Oxford: Oxford University Press, in press]. By expanding the notion of a passive short-term memory to an active system that provides the basis for complex cognitive abilities, the model has opened up numerous questions and new lines of research. In this paper we present the current revision of the multi-component model that encompasses a central executive, two unimodal storage systems: a phonological loop and a visuospatial sketchpad, and a further component, a multimodal store capable of integrating information into unitary episodic representations, termed episodic buffer. We review recent empirical data within experimental cognitive psychology that has shaped the development of the multicomponent model and the understanding of the capacities and properties of working memory. Research based largely on dual-task experimental designs and on neuropsychological evidence has yielded valuable information about the fractionation of working memory into independent stores and processes, the nature of representations in individual stores, the mechanisms of their maintenance and manipulation, the way the components of working memory relate to each other, and the role they play in other

  8. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    Directory of Open Access Journals (Sweden)

    Robert Jarosch

    2008-12-01

    This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large-scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i.e. latency relaxation, the Fenn effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc., are explained and interpreted with the help of the model experiments.

  9. A Modeling Framework to Investigate the Radial Component of the Pushrim Force in Manual Wheelchair Propulsion

    Directory of Open Access Journals (Sweden)

    Ackermann Marko

    2015-01-01

    The ratio of tangential to total pushrim force, the so-called Fraction Effective Force (FEF), has been used to evaluate wheelchair propulsion efficiency based on the fact that only the tangential component of the force on the pushrim contributes to actual wheelchair propulsion. Experimental studies, however, consistently show low FEF values, and recent experimental as well as modelling investigations have conclusively shown that a more tangential pushrim force direction can lead to a decrease, and not an increase, in propulsion efficiency. This study aims at quantifying the contributions of active, inertial and gravitational forces to the normal pushrim force component. In order to achieve this goal, an inverse dynamics-based framework is proposed to estimate individual contributions to the pushrim forces using a model of the wheelchair-user system. The results show that the radial pushrim force component arises to a great extent from purely mechanical effects, including inertial and gravitational forces. These results corroborate previous findings according to which radial pushrim force components are not necessarily a result of inefficient propulsion strategies or hand-rim friction requirements. This study proposes a novel framework to quantify the individual contributions of active, inertial and gravitational forces to pushrim forces during wheelchair propulsion.
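    For reference, the fraction effective force discussed above is the ratio of the tangential (propulsive) component to the total pushrim force magnitude:

```latex
\mathrm{FEF} \;=\; \frac{F_{\mathrm{tan}}}{F_{\mathrm{tot}}}
\;=\; \frac{F_{\mathrm{tan}}}
           {\sqrt{F_{\mathrm{tan}}^{2} + F_{\mathrm{rad}}^{2} + F_{\mathrm{ax}}^{2}}},
```

    with only the tangential component generating propulsive torque about the wheel axle (τ = F_tan · r_rim), which is why a low FEF was traditionally read as inefficiency even though, as noted above, forcing a more tangential direction does not necessarily improve propulsion efficiency.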

  10. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2014-07-01

    The demand for automobiles in India increased drastically in the last two and a half decades. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO) has taken a proactive approach, in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, to assist Indian SMEs in various clusters since 1999 and make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the field of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework provides greater insight into the implementation process at a more micro level than the UNIDO-ACMA Model.

  11. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-06-01

    The demand for automobiles in India increased drastically in the last two and a half decades. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO) has taken a proactive approach, in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, to assist Indian SMEs in various clusters since 1999 and make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the field of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework provides greater insight into the implementation process at a more micro level than the UNIDO-ACMA Model.

  12. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    Science.gov (United States)

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) the data conversion part is designed to convert binary raw data to and from NetCDF data; it can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) the visualization part is designed for displaying grid map series (playing forward or backward) with a simple map legend, and for displaying temporal trend curves for data on individual map pixels; and 3) the modeling interface is designed for environmental model development, for which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint to show NetCDF map animations are given.
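
    NCWin itself is a Windows COM component, but the basic NetCDF read/write operations it wraps can be illustrated with the widely used netCDF4 Python bindings. This is only a sketch; the file, dimension and variable names are hypothetical and unrelated to NCWin's actual interface.

```python
# Minimal NetCDF round trip with the netCDF4 Python bindings.
# File, dimension and variable names are illustrative only.
import numpy as np
from netCDF4 import Dataset

# --- write a small gridded time series ---
with Dataset("demo.nc", "w", format="NETCDF4") as ds:
    ds.createDimension("time", None)                 # unlimited record dimension
    ds.createDimension("y", 4)
    ds.createDimension("x", 5)
    temp = ds.createVariable("temp", "f4", ("time", "y", "x"))
    temp.units = "degC"
    temp[0:3, :, :] = np.random.rand(3, 4, 5).astype("f4")

# --- read it back and inspect one pixel's temporal trend ---
with Dataset("demo.nc") as ds:
    series = ds.variables["temp"][:, 2, 3]           # all time steps, one pixel
    print(ds.variables["temp"].units, series.shape)
```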

  13. Low-Altitude Aerial Methane Concentration Mapping

    Directory of Open Access Journals (Sweden)

    Bara J. Emran

    2017-08-01

    Full Text Available Detection of leaks of fugitive greenhouse gases (GHGs from landfills and natural gas infrastructure is critical for not only their safe operation but also for protecting the environment. Current inspection practices involve moving a methane detector within the target area by a person or vehicle. This procedure is dangerous, time consuming, labor intensive and above all unavailable when access to the desired area is limited. Remote sensing by an unmanned aerial vehicle (UAV equipped with a methane detector is a cost-effective and fast method for methane detection and monitoring, especially for vast and remote areas. This paper describes the integration of an off-the-shelf laser-based methane detector into a multi-rotor UAV and demonstrates its efficacy in generating an aerial methane concentration map of a landfill. The UAV flies a preset flight path measuring methane concentrations in a vertical air column between the UAV and the ground surface. Measurements were taken at 10 Hz giving a typical distance between measurements of 0.2 m when flying at 2 m/s. The UAV was set to fly at 25 to 30 m above the ground. We conclude that besides its utility in landfill monitoring, the proposed method is ready for other environmental applications as well as the inspection of natural gas infrastructure that can release methane with much higher concentrations.
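
    As a rough numerical illustration of the sampling geometry described above, the along-track spacing follows directly from ground speed and sampling rate, and point readings can be binned onto a grid to form a simple concentration map. The snippet below is a sketch with simulated values, not the authors' processing chain.

```python
import numpy as np

# Along-track spacing quoted in the abstract: 2 m/s ground speed at 10 Hz sampling
speed_m_s, rate_hz = 2.0, 10.0
print("along-track spacing:", speed_m_s / rate_hz, "m")    # -> 0.2 m

# Hypothetical georeferenced column readings (easting, northing in m; ppm*m)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 100.0, 500)
y = rng.uniform(0.0, 100.0, 500)
c = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# Average the point readings into 10 m x 10 m cells to build a simple map
edges = np.arange(0.0, 110.0, 10.0)
sums, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=c)
counts, _, _ = np.histogram2d(x, y, bins=[edges, edges])
cell_mean = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
print(cell_mean.shape)                                     # 10 x 10 grid of mean concentration
```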

  14. The Research and Analysis on Failure Distribution Model of Diesel Engine Component Parts

    Directory of Open Access Journals (Sweden)

    Wang Shaokun

    2016-01-01

    Full Text Available Reliability research not only provides direction for quality improvement in new product research and development, but also helps to obtain the product failure distribution model, enabling the logistics organization to improve its service level. Based on the reliability data from the research group's database, we analyzed the failure distribution of a high-frequency fault component (the fuel injection pump) and built a Weibull model. The parameters of the model are estimated by using the uniform linear method and the least squares method. After solving the model, the density function curves and the failure rate curves are drawn, and a model for forecasting spare-parts demand based on the failure distribution is obtained.
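
    The record does not include the underlying data, but the linearized least-squares (median-rank) Weibull fit it describes can be sketched as follows; the failure times below are hypothetical.

```python
import numpy as np

# Hypothetical times-to-failure (hours) of the component, sorted ascending
t = np.sort(np.array([120.0, 340.0, 560.0, 810.0, 1100.0, 1450.0, 1900.0, 2600.0]))
n = len(t)

# Median-rank estimate of the empirical CDF
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# Weibull CDF F(t) = 1 - exp(-(t/eta)^beta) linearizes to
#   ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)
beta = slope
eta = np.exp(-intercept / beta)

def hazard(time):
    """Failure-rate curve implied by the fitted Weibull parameters."""
    return (beta / eta) * (time / eta) ** (beta - 1.0)

print(f"shape beta ~ {beta:.2f}, scale eta ~ {eta:.0f} h, h(1000 h) ~ {hazard(1000.0):.2e} per h")
```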

  15. Identifying and validating the components of nursing practice models for long-term care facilities.

    Science.gov (United States)

    Mueller, Christine; Savik, Kay

    2010-10-01

    Nursing practice models (NPMs) provide the framework for the design and delivery of nursing care to residents in long-term care (LTC) facilities and characterize the manner in which nursing staff assemble to accomplish clinical goals. The purpose of this study was to identify and validate the distinctive components of NPMs in LTC facilities and develop an instrument to describe and evaluate NPMs in such settings. The study included validation of the NPM components through a literature review and focus groups with nursing staff from LTC facilities; development and modification of the Nursing Practice Model Questionnaire (NPMQ); and examination of the validity and reliability of the NPMQ through pilot testing in 15 LTC facilities with 508 nursing staff. Five factors--decision making, informal continuity of information, formal continuity of information, continuity of care, and accountability--comprise the five subscales of the NPMQ, a 37-item questionnaire with established respectable validity and reliability. Copyright 2010, SLACK Incorporated.

  16. Coaching Younger Practitioners and Students Using Components of the Co-Active Coaching Model

    OpenAIRE

    Tofade, Toyin

    2010-01-01

    Coaching is used to improve performance, achieve preset goals and obtain desired results. Several coaching models have been used in health professions for leadership and professional development. This article describes some components of Co-Active Coaching® that can be applied while coaching pharmacy students and younger practitioners. Co-Active Coaching requires the coach to use a broad range of communication skills, including listening, asking powerful questions, making insightful comments,...

  17. TWO-COMPONENT GALACTIC BULGE PROBED WITH RENEWED GALACTIC CHEMICAL EVOLUTION MODEL

    International Nuclear Information System (INIS)

    Tsujimoto, Takuji; Bekki, Kenji

    2012-01-01

    Results of recent observations of the Galactic bulge demand that we discard a simple picture of its formation, suggesting the presence of two stellar populations represented by two peaks of the stellar metallicity distribution function (MDF) in the bulge. To assess this issue, we construct Galactic chemical evolution models that have been updated in two respects: first, the delay time distribution of Type Ia supernovae (SNe Ia) recently revealed by extensive SN Ia surveys is incorporated into the models. Second, the nucleosynthesis clock, the s-processing in asymptotic giant branch stars, is carefully considered in this study. This novel model first shows that the Galaxy feature tagged by the key elements, Mg, Fe, and Ba, for the bulge as well as the thin and thick disks is compatible with a short-delay SN Ia. We present a successful modeling of a two-component bulge including the MDF and the evolutions of [Mg/Fe] and [Ba/Mg], and reveal its origin as follows. A metal-poor component (⟨[Fe/H]⟩ ∼ –0.5) is formed with a relatively short timescale of ∼1 Gyr. These properties are identical to the thick disk's characteristics in the solar vicinity. Subsequently, from its remaining gas mixed with a gas flow from the disk outside the bulge, a metal-rich component (⟨[Fe/H]⟩ ∼ +0.3) is formed over a longer timescale (∼4 Gyr), together with a top-heavy initial mass function, that might be identified with the thin disk component within the bulge.

  18. Evaluation of low dose ionizing radiation effect on some blood components in animal model

    OpenAIRE

    El-Shanshoury, H.; El-Shanshoury, G.; Abaza, A.

    2016-01-01

    Exposure to ionizing radiation is known to have lethal effects in blood cells. It is predicted that an individual may spend days, weeks or even months in a radiation field without becoming alarmed. The study aimed to discuss the evaluation of low dose ionizing radiation (IR) effect on some blood components in animal model. Hematological parameters were determined for 110 animal rats (divided into 8 groups) pre- and post-irradiation. An attempt to explain the blood changes resulting from both ...

  19. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance-sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.
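
    The record gives no data, but the two-stage idea, compressing many balance-sheet series with principal components and then relating the leading components to macroeconomic activity in a vector error correction model, can be sketched with scikit-learn and statsmodels. All series below are simulated; lag order and cointegration rank are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
T = 200

# Simulated bank balance-sheet indicators (20 series) and a macro activity index
bank_panel = np.cumsum(rng.normal(size=(T, 20)), axis=0)
activity = np.cumsum(rng.normal(size=T))

# Step 1: summarize the balance-sheet panel with its first two principal components
components = PCA(n_components=2).fit_transform(bank_panel)

# Step 2: estimate a VECM linking the components and the macro series
endog = np.column_stack([components, activity])
res = VECM(endog, k_ar_diff=2, coint_rank=1, deterministic="co").fit()
print(res.alpha)   # adjustment coefficients toward the long-run relation
```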

  20. Numerical Modelling of Multi-Phase Multi-Component Reactive Transport in the Earth's interior

    Science.gov (United States)

    Oliveira, Beñat; Afonso, Juan Carlos; Zlotnik, Sergio; Tilhac, Romain

    2017-04-01

    We present a conceptual and numerical approach to model processes in the Earth's interior that involve multiple phases that simultaneously interact thermally, mechanically and chemically. The approach is truly multiphase in the sense that each dynamic phase is explicitly modelled with an individual set of mass, momentum, energy and chemical mass balance equations coupled via interfacial interaction terms. It is also truly multi-component in the sense that the compositions of the system and its constituent thermodynamic phases are expressed by a full set of fundamental chemical components (e.g. SiO2, Al2O3, MgO, etc.) rather than proxies. In contrast to previous approaches, these chemical components evolve, react with, and partition into different phases with different physical properties according to an internally-consistent thermodynamic model. This enables a thermodynamically-consistent coupling of the governing set of balance equations. Interfacial processes such as surface tensions and/or surface energy contributions to the dynamics and energetics of the system are also taken into account. The model presented here describes the evolution of systems governed by Multi-Phase Multi-Component Reactive Transport (MPMCRT) based on Ensemble Averaging and Classical Irreversible Thermodynamics principles. This novel approach provides a flexible platform to study the dynamics and non-linear feedbacks occurring within various natural systems at different scales. This notably includes major- and trace-element transport, diffusion-controlled trace-element re-equilibration or rheological changes associated with melt generation and migration in the Earth's mantle.

  1. High Cost/High Risk Components to Chalcogenide Molded Lens Model: Molding Preforms and Mold Technology

    Energy Technology Data Exchange (ETDEWEB)

    Bernacki, Bruce E.

    2012-10-05

    This brief report contains a critique of two key components of FiveFocal's cost model for glass compression molding of chalcogenide lenses for infrared applications. Molding preforms and mold technology have the greatest influence on the ultimate cost of the product and help determine the volumes needed to select glass molding over conventional single-point diamond turning or grinding and polishing. This brief report highlights key areas of both technologies with recommendations for further study.

  2. Model Components of Mangrove Resources Management Based on Blue Economy Concept

    OpenAIRE

    Bidayani, Endang; Soemarno, Soemarno; Harahab, Nuddin; Rudianto, Rudianto

    2016-01-01

    This study was aimed at analyzing variables affecting mangrove resource conservation based on the blue economy concept. The model component analysis applied the Spearman rank correlation test. Results showed that Z-calc. was bigger than Z-tab. (1.64) at the 95% confidence level, and therefore H0 was rejected. This study concluded that resource efficiency, without wastes, social awareness, a cyclic system of production, innovation and adaptation, and institutions were blue economy concept-based variables. In ...

  3. The Internet addiction components model and personality: establishing construct validity via a nomological network

    OpenAIRE

    Kuss, DJ; Shorter, GW; Van Rooij, AJ; Van de Mheen, D; Griffiths, MD

    2014-01-01

    There is growing concern over excessive and sometimes problematic Internet use. Drawing upon the framework of the components model of addiction (Griffiths, 2005), Internet addiction appears as a behavioural addiction characterised by the following symptoms: salience, withdrawal, tolerance, mood modification, relapse and conflict. A number of factors have been associated with an increased risk for Internet addiction, including personality traits. The overall aim of this study was to establish th...

  4. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, Stephen [Cleveland State Univ., Cleveland, OH (United States)

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  5. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    International Nuclear Information System (INIS)

    Duffy, Stephen

    2013-01-01

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  6. A Computational Model of Torque Generation: Neural, Contractile, Metabolic and Musculoskeletal Components

    Science.gov (United States)

    Callahan, Damien M.; Umberger, Brian R.; Kent-Braun, Jane A.

    2013-01-01

    The pathway of voluntary joint torque production includes motor neuron recruitment and rate-coding, sarcolemmal depolarization and calcium release by the sarcoplasmic reticulum, force generation by motor proteins within skeletal muscle, and force transmission by tendon across the joint. The direct source of energetic support for this process is ATP hydrolysis. It is possible to examine portions of this physiologic pathway using various in vivo and in vitro techniques, but an integrated view of the multiple processes that ultimately impact joint torque remains elusive. To address this gap, we present a comprehensive computational model of the combined neuromuscular and musculoskeletal systems that includes novel components related to intracellular bioenergetics function. Components representing excitatory drive, muscle activation, force generation, metabolic perturbations, and torque production during voluntary human ankle dorsiflexion were constructed, using a combination of experimentally-derived data and literature values. Simulation results were validated by comparison with torque and metabolic data obtained in vivo. The model successfully predicted peak and submaximal voluntary and electrically-elicited torque output, and accurately simulated the metabolic perturbations associated with voluntary contractions. This novel, comprehensive model could be used to better understand impact of global effectors such as age and disease on various components of the neuromuscular system, and ultimately, voluntary torque output. PMID:23405245

  7. Complex relation among Health Belief Model components in TB prevention and care.

    Science.gov (United States)

    Li, Z T; Yang, S S; Zhang, X X; Fisher, E B; Tian, B C; Sun, X Y

    2015-07-01

    This study aims to explore the relationships among components of the Health Belief Model, tuberculosis (TB) preventive behavior, and intention of seeking TB care. Cross-sectional study. Using convenience sampling, 1154 rural-to-urban migrant workers aged 18-50 years were selected in six urban areas of three provinces in China. The survey was conducted by individual, face-to-face interviews with a standardized questionnaire. Lisrel 8.7 was used to conduct path analysis. The knowledge and benefits components of the Health Belief Model predicted preventive behaviors: covering the nose/mouth when coughing or sneezing (β = 0.24 and 0.33, respectively) and evading others' coughs (β = 0.13, 0.25); they also predicted seeking TB care (β = 0.27, 0.19). Susceptibility and severity also predicted seeking TB care (β = 0.12, 0.16). There were also important relationships among model components. Knowledge of TB predicted both susceptibility (β = 0.32-0.60) and severity (β = 0.41-0.45). Further, susceptibility (β = 0.30) and severity (β = 0.41) each predicted perceived benefits of preventive care. Thus, a path from knowledge, through severity and susceptibility, and then through benefits predicted prevention and TB care-seeking behaviors. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  8. [Fundamentally study on mathematical kinetic model of component extraction from FTCM].

    Science.gov (United States)

    He, Fu-Yuan; Deng, Kai-Wen; Luo, Jie-Ying; Liu, Wei; Liu, Wen-Long; Deng, Chang-Qing

    2007-03-01

    To establish a mathematical kinetic model of the components extracted from FTCM (formulae of traditional Chinese medicine) and to analyze the parameters of astragaloside IV extracted from BYHWD (Buyang Huanwu decoction). The model, comprising algebraic and differential equation groups, was set up according to Fick's law and the Noyes-Whitney dissolution theory, covering two transfer-diffusion processes in the extraction of components from FTCM: (1) from protoplast to apoplast, i.e. from the compartment inside the cell membrane to the outside compartment; and (2) from apoplast to solution, i.e. from the outside compartment to the solvent compartment. Using the Laplace transform, the equation groups were solved to give expressions describing the quantitative change of component concentration in the solvent versus time. The kinetic parameters of the model were analyzed, and the parameters of astragaloside IV in BYHWD extracted with water at 100 degrees C were estimated with this model. The established mathematical model consists of the sum of three exponential terms. The kinetic parameters M, alpha, N, beta, L, pi, K, k1', k2', rho1, rho2, tmax, Cmax, AUC, w0, P and D of BYHWD were, respectively, 0.06127%, 0.2802 min(-1), -1.027%, 0.008965 min(-1), 1.077%, 0.002665 min(-1), 3.451 x 10(-3) min(-1), 3.188 x 10(-3) min(-1), 0.3759 min(-1), 1.420 min, 0.7547 min, 184.9 min, 0.05721 mg x mL(-1), 289.9 min, 0.07011%, 46.24%, 22.35%. The kinetic model, applied to an isolated system, follows a multiple-linear rule, and each parameter can be analyzed completely.

  9. Response Surface Modeling of Combined-Cycle Propulsion Components using Computational Fluid Dynamics

    Science.gov (United States)

    Steffen, C. J., Jr.

    2002-01-01

    Three examples of response surface modeling with CFD are presented for combined cycle propulsion components. The examples include a mixed-compression-inlet during hypersonic flight, a hydrogen-fueled scramjet combustor during hypersonic flight, and a ducted-rocket nozzle during all-rocket flight. Three different experimental strategies were examined, including full factorial, fractionated central-composite, and D-optimal with embedded Plackett-Burman designs. The response variables have been confined to integral data extracted from multidimensional CFD results. Careful attention to uncertainty assessment and modeling bias has been addressed. The importance of automating experimental setup and effectively communicating statistical results are emphasized.
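
    A response surface in this context is typically a low-order polynomial fitted to a designed set of CFD runs. The sketch below fits a full-quadratic surface to a two-factor face-centred central-composite design with simulated responses; it is generic and not tied to the propulsion cases in the record.

```python
import numpy as np

rng = np.random.default_rng(1)

# Coded factor levels of a face-centred central-composite design in two factors
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
x1, x2 = pts[:, 0], pts[:, 1]

# Simulated integral response from the CFD runs (e.g. a total-pressure recovery)
y = (0.92 - 0.03 * x1 + 0.01 * x2 - 0.02 * x1 * x2 - 0.015 * x1**2
     + rng.normal(scale=1e-3, size=len(x1)))

# Full quadratic response surface: y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 4))
```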

  10. Mixture estimation with state-space components and Markov model of switching

    Czech Academy of Sciences Publication Activity Database

    Nagy, Ivan; Suzdaleva, Evgenia

    2013-01-01

    Roč. 37, č. 24 (2013), s. 9970-9984. ISSN 0307-904X. R&D Projects: GA TA ČR TA01030123. Institutional support: RVO:67985556. Keywords: probabilistic dynamic mixtures * probability density function * state-space models * recursive mixture estimation * Bayesian dynamic decision making under uncertainty * Kerridge inaccuracy. Subject RIV: BC - Control Systems Theory. Impact factor: 2.158, year: 2013. http://library.utia.cas.cz/separaty/2013/AS/nagy-mixture estimation with state-space components and markov model of switching.pdf

  11. Two-component Thermal Dust Emission Model: Application to the Planck HFI Maps

    Science.gov (United States)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2014-06-01

    We present full-sky, 6.1 arcminute resolution maps of dust optical depth and temperature derived by fitting the Finkbeiner et al. (1999) two-component dust emission model to the Planck HFI and IRAS 100 micron maps. This parametrization of the far infrared thermal dust SED as the sum of two modified blackbodies serves as an important alternative to the commonly adopted single modified blackbody dust emission model. We expect our Planck-based maps of dust temperature and optical depth to form the basis for a next-generation, high-resolution extinction map which will additionally incorporate small-scale detail from WISE imaging.
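
    The parametrization referred to here is the sum of two modified blackbodies, each of the form tau * (nu/nu0)^beta * B_nu(T). A minimal evaluation in Python is sketched below; the temperatures, emissivity indices and optical depths are illustrative of the Finkbeiner et al. (1999) form rather than the fitted map values.

```python
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def planck(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def two_mbb(nu, tau1, T1, beta1, tau2, T2, beta2, nu0=3.0e12):
    """Sum of two modified blackbodies: tau_i * (nu/nu0)**beta_i * B_nu(T_i)."""
    return (tau1 * (nu / nu0) ** beta1 * planck(nu, T1)
            + tau2 * (nu / nu0) ** beta2 * planck(nu, T2))

# Evaluate at IRAS 100 micron and Planck HFI 350/550/850 micron bands
nu = c / np.array([100e-6, 350e-6, 550e-6, 850e-6])
print(two_mbb(nu, tau1=1e-4, T1=9.4, beta1=1.7, tau2=1e-5, T2=16.2, beta2=2.7))
```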

  12. Observation Data Model Core Components, its Implementation in the Table Access Protocol Version 1.1

    Science.gov (United States)

    Louys, Mireille; Tody, Doug; Dowler, Patrick; Durand, Daniel; Michel, Laurent; Bonnarel, Francos; Micol, Alberto; IVOA DataModel Working Group

    2017-05-01

    This document defines the core components of the Observation data model that are necessary to perform data discovery when querying data centers for astronomical observations of interest. It exposes use-cases to be carried out, explains the model and provides guidelines for its implementation as a data access service based on the Table Access Protocol (TAP). It aims at providing a simple model easy to understand and to implement by data providers that wish to publish their data into the Virtual Observatory. This interface integrates data modeling and data access aspects in a single service and is named ObsTAP. It will be referenced as such in the IVOA registries. In this document, the Observation Data Model Core Components (ObsCoreDM) defines the core components of queryable metadata required for global discovery of observational data. It is meant to allow a single query to be posed to TAP services at multiple sites to perform global data discovery without having to understand the details of the services present at each site. It defines a minimal set of basic metadata and thus allows for a reasonable cost of implementation by data providers. The combination of the ObsCoreDM with TAP is referred to as an ObsTAP service. As with most of the VO Data Models, ObsCoreDM makes use of STC, Utypes, Units and UCDs. The ObsCoreDM can be serialized as a VOTable. ObsCoreDM can make reference to more complete data models such as Characterisation DM, Spectrum DM or Simple Spectral Line Data Model (SSLDM). ObsCore shares a large set of common concepts with the DataSet Metadata Data Model (Cresitello-Dittmar et al. 2016), which binds together most of the data model concepts from the above models in a comprehensive and more general framework. This current specification, on the contrary, provides guidelines for implementing these concepts using the TAP protocol and answering ADQL queries. It is dedicated to global discovery.
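
    A typical global-discovery query against an ObsTAP service selects rows from the standard ivoa.obscore table via ADQL. The sketch below uses the pyvo client (a choice not prescribed by the specification); the service URL is a placeholder, while the column names are standard ObsCore ones.

```python
# Discover calibrated images around a position via an ObsTAP service using pyvo.
# The endpoint URL is a placeholder; any TAP service exposing ivoa.obscore works.
import pyvo

service = pyvo.dal.TAPService("https://example.org/tap")
adql = """
SELECT TOP 20 obs_id, dataproduct_type, s_ra, s_dec, t_min, access_url
FROM ivoa.obscore
WHERE dataproduct_type = 'image'
  AND CONTAINS(POINT('ICRS', s_ra, s_dec),
               CIRCLE('ICRS', 83.63, 22.01, 0.1)) = 1
"""
results = service.search(adql)
print(results.to_table())
```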

  13. Modelling malaria incidence by an autoregressive distributed lag model with spatial component.

    Science.gov (United States)

    Laguna, Francisco; Grillet, María Eugenia; León, José R; Ludeña, Carenne

    2017-08-01

    The influence of climatic variables on the dynamics of human malaria has been widely highlighted. Also, it is known that this mosquito-borne infection varies in space and time. However, when the data are spatially incomplete, most popular spatio-temporal methods of analysis cannot be applied directly. In this paper, we develop a two-step methodology to model the spatio-temporal dependence of malaria incidence on local rainfall, temperature, and humidity as well as the regional sea surface temperatures (SST) on the northern coast of Venezuela. First, we fit an autoregressive distributed lag model (ARDL) to the weekly data, and then we adjust a linear separable spatial vector autoregressive model (VAR) to the residuals of the ARDL. Finally, the model parameters are tuned using a Markov Chain Monte Carlo (MCMC) procedure derived from the Metropolis-Hastings algorithm. Our results show that the best model to account for the variations of malaria incidence from 2001 to 2008 in 10 endemic municipalities in North-Eastern Venezuela is a logit model that included the accumulated local precipitation in combination with the local maximum temperature of the preceding month as positive regressors. Additionally, we show that although malaria dynamics is highly heterogeneous in space, a detailed analysis of the estimated spatial parameters in our model yields important insights regarding the joint behavior of the disease incidence across the different counties in our study. Copyright © 2017 Elsevier Ltd. All rights reserved.
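
    The first modelling stage described above, a distributed-lag regression of weekly incidence on lagged local climate covariates, can be sketched with pandas and statsmodels on simulated data. The lag choices and the linear (rather than logit) link here are purely illustrative, and the spatial VAR/MCMC second stage is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = 300
df = pd.DataFrame({
    "rain": rng.gamma(2.0, 20.0, weeks),                       # weekly rainfall (mm)
    "tmax": 28.0 + 3.0 * np.sin(np.arange(weeks) * 2.0 * np.pi / 52.0)
            + rng.normal(0.0, 1.0, weeks),                     # weekly max temperature (C)
})
# Simulated incidence responding to climate roughly one month earlier
df["cases"] = (5.0 + 0.05 * df["rain"].shift(4) + 0.8 * df["tmax"].shift(4)
               + rng.normal(0.0, 2.0, weeks))

# ARDL-style design: lagged incidence plus distributed lags of the climate covariates
X = pd.concat({
    "cases_l1": df["cases"].shift(1),
    "rain_l4": df["rain"].shift(4),
    "tmax_l4": df["tmax"].shift(4),
}, axis=1).dropna()
y = df.loc[X.index, "cases"]
fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.params)
```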

  14. Analysis of turbulence models for thermohydraulic calculations of helium cooled fusion reactor components

    Energy Technology Data Exchange (ETDEWEB)

    Arbeiter, F. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany); Gordeev, S. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany)]. E-mail: gordeev@irs.fzk.de; Heinzel, V. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany); Slobodtchouk, V. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany)

    2006-02-15

    The aim of the present work is to choose an optimal use of CFD codes for the thermohydraulic calculation of helium-cooled fusion reactor components, such as the divertor module, the test blanket module and the International Fusion Materials Irradiation Facility (IFMIF) test modules. In spite of common features (intense heat flux, nuclear heating of the structure, helium cooling), all these components have different boundary conditions, such as helium temperature, pressure and heating rate, and different geometries. This gives rise to flow effects that significantly influence the heat transfer. A number of turbulence models offered by the commercial STAR-CD code were tested against experiments carried out at the Forschungszentrum Karlsruhe (FZK) and against experimental data from the scientific literature. Results of different turbulence models are compared and analysed. For geometrically simple channel flows with significant gas property variation, low-Re-number k-ε models with damping functions give more accurate results and are more appropriate for the conditions of the IFMIF HFTM. The heat transfer in regions with flow impingement is well predicted by turbulence models that include different limiters in the turbulence production. The most reliable turbulence models were chosen for the thermohydraulic analysis.

  15. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Science.gov (United States)

    O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D

    2015-01-01

    Detectability of individual animals is highly variable and nearly always less than one; we therefore used binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes the ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase the reliability of population parameter estimates.
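
    At the core of a binomial (N-)mixture model is a site-level likelihood that sums a binomial observation model over possible latent abundances drawn from a Poisson state process. A minimal single-site sketch with hypothetical counts:

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

def site_loglik(counts, lam, p, n_max=200):
    """Binomial-mixture log-likelihood for repeated counts at one site.

    counts: repeated counts y_1..y_J; lam: Poisson mean abundance (state process);
    p: per-individual detection probability; n_max: truncation of the latent sum.
    """
    counts = np.asarray(counts)
    n = np.arange(counts.max(), n_max + 1)            # latent abundance N >= max observed count
    log_state = stats.poisson.logpmf(n, lam)          # state process: N ~ Poisson(lam)
    log_obs = stats.binom.logpmf(counts[:, None], n[None, :], p).sum(axis=0)  # y_j ~ Binomial(N, p)
    return logsumexp(log_state + log_obs)

print(site_loglik([3, 5, 2], lam=8.0, p=0.4))
```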

  16. Component simulation in problems of calculated model formation of automatic machine mechanisms

    Directory of Open Access Journals (Sweden)

    Telegin Igor

    2017-01-01

    Full Text Available The paper deals with the application of the component simulation method to automating the formation of mechanical system models, with the further possibility of their CAD realization. The purpose of the investigation is to automate CAD-model formation for high-speed mechanisms in automatic machines and to analyse the dynamic processes occurring in their units, taking into account their elasto-inertial properties, power dissipation, gaps in kinematic pairs, friction forces, and design and technological loads. As an example, the paper considers a formalization of the stages in forming the computer model of the cutting mechanism of the AV1818 cold-stamping automatic machine and methods for computing its parameters on the basis of its solid-state model.

  17. Potential for monitoring soil erosion features and soil erosion modeling components from remotely sensed data

    Science.gov (United States)

    Langran, K. J.

    1983-01-01

    Accurate estimates of soil erosion and its effects on soil productivity are essential in agricultural decision making and planning from the field scale to the national level. Erosion models have been primarily developed for designing erosion control systems, predicting sediment yield for reservoir design, predicting sediment transport, and simulating water quality. New models proposed are more comprehensive in that the necessary components (hydrology, erosion-sedimentation, nutrient cycling, tillage, etc.) are linked in a model appropriate for studying the erosion-productivity problem. Recent developments in remote sensing systems, such as Landsat Thematic Mapper, Shuttle Imaging Radar (SIR-B), etc., can contribute significantly to the future development and operational use of these models.

  18. The Effect of Multidimensional Motivation Interventions on Cognitive and Behavioral Components of Motivation: Testing Martin's Model

    Directory of Open Access Journals (Sweden)

    Fatemeh PooraghaRoodbarde

    2017-04-01

    Full Text Available Objective: The present study aimed at examining the effect of multidimensional motivation interventions based on Martin's model on cognitive and behavioral components of motivation. Methods: The research design was prospective with pretest, posttest, and follow-up, and 2 experimental groups. In this study, 90 students (45 participants in the experimental group and 45 in the control group) constituted the sample of the study, and they were selected by available sampling method. Motivation interventions were implemented for fifteen 60-minute sessions 3 times a week, which lasted for about 2 months. Data were analyzed using repeated measures multivariate variance analysis test. Results: The findings revealed that multidimensional motivation interventions resulted in a significant increase in the scores of cognitive components such as self-efficacy, mastery goal, test anxiety, and feeling of lack of control, and behavioral components such as task management. The results of one-month follow-up indicated the stability of the created changes in test anxiety and cognitive strategies; however, no significant difference was found between the 2 groups at the follow-up in self-efficacy, mastery goals, source of control, and motivation. Conclusions: The research evidence indicated that academic motivation is a multidimensional component and is affected by cognitive and behavioral factors; therefore, researchers, teachers, and other authorities should attend to these factors to increase academic motivation.

  19. The Effect of Multidimensional Motivation Interventions on Cognitive and Behavioral Components of Motivation: Testing Martin's Model.

    Science.gov (United States)

    Pooragha Roodbarde, Fatemeh; Talepasand, Siavash; Rahimian Boogar, Issac

    2017-04-01

    Objective: The present study aimed at examining the effect of multidimensional motivation interventions based on Martin's model on cognitive and behavioral components of motivation. Method: The research design was prospective with pretest, posttest, and follow-up, and 2 experimental groups. In this study, 90 students (45 participants in the experimental group and 45 in the control group) constituted the sample of the study, and they were selected by available sampling method. Motivation interventions were implemented for fifteen 60-minute sessions 3 times a week, which lasted for about 2 months. Data were analyzed using repeated measures multivariate variance analysis test. Results: The findings revealed that multidimensional motivation interventions resulted in a significant increase in the scores of cognitive components such as self-efficacy, mastery goal, test anxiety, and feeling of lack of control, and behavioral components such as task management. The results of one-month follow-up indicated the stability of the created changes in test anxiety and cognitive strategies; however, no significant difference was found between the 2 groups at the follow-up in self-efficacy, mastery goals, source of control, and motivation. Conclusion: The research evidence indicated that academic motivation is a multidimensional component and is affected by cognitive and behavioral factors; therefore, researchers, teachers, and other authorities should attend to these factors to increase academic motivation.

  20. A molecular systems approach to modelling human skin pigmentation: identifying underlying pathways and critical components.

    Science.gov (United States)

    Raghunath, Arathi; Sambarey, Awanti; Sharma, Neha; Mahadevan, Usha; Chandra, Nagasuma

    2015-04-29

    Ultraviolet radiations (UV) serve as an environmental stress for human skin, and result in melanogenesis, with the pigment melanin having protective effects against UV induced damage. This involves a dynamic and complex regulation of various biological processes that results in the expression of melanin in the outer most layers of the epidermis, where it can exert its protective effect. A comprehensive understanding of the underlying cross talk among different signalling molecules and cell types is only possible through a systems perspective. Increasing incidences of both melanoma and non-melanoma skin cancers necessitate the need to better comprehend UV mediated effects on skin pigmentation at a systems level, so as to ultimately evolve knowledge-based strategies for efficient protection and prevention of skin diseases. A network model for UV-mediated skin pigmentation in the epidermis was constructed and subjected to shortest path analysis. Virtual knock-outs were carried out to identify essential signalling components. We describe a network model for UV-mediated skin pigmentation in the epidermis. The model consists of 265 components (nodes) and 429 directed interactions among them, capturing the manner in which one component influences the other and channels information. Through shortest path analysis, we identify novel signalling pathways relevant to pigmentation. Virtual knock-outs or perturbations of specific nodes in the network have led to the identification of alternate modes of signalling as well as enabled determining essential nodes in the process. The model presented provides a comprehensive picture of UV mediated signalling manifesting in human skin pigmentation. A systems perspective helps provide a holistic purview of interconnections and complexity in the processes leading to pigmentation. The model described here is extensive yet amenable to expansion as new data is gathered. Through this study, we provide a list of important proteins essential

  1. A Four–Component Model of Age–Related Memory Change

    Science.gov (United States)

    Healey, M. Karl; Kahana, Michael J.

    2015-01-01

    We develop a novel, computationally explicit, theory of age–related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that includes aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates four components: 1) the ability to sustain attention across an encoding episode, 2) the ability to retrieve contextual representations for use as retrieval cues, 3) the ability to monitor retrievals and reject intrusions, and 4) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the four–component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus we provide a four–component theory of a complex pattern of age differences across two key laboratory tasks. PMID:26501233

  2. A four-component model of age-related memory change.

    Science.gov (United States)

    Healey, M Karl; Kahana, Michael J

    2016-01-01

    We develop a novel, computationally explicit, theory of age-related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that include aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates 4 components: (a) the ability to sustain attention across an encoding episode, (b) the ability to retrieve contextual representations for use as retrieval cues, (c) the ability to monitor retrievals and reject intrusions, and (d) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the 4-component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus, we provide a 4-component theory of a complex pattern of age differences across 2 key laboratory tasks. (c) 2015 APA, all rights reserved).

  3. Mind the gaps: a state-space model for analysing the dynamics of North Sea herring spawning components

    DEFF Research Database (Denmark)

    Payne, Mark

    2010-01-01

    ..., the sum of the fitted abundance indices across all components proves an excellent proxy for the biomass of the total stock, even though the model utilizes information at the individual-component level. The Orkney–Shetland component appears to have recovered faster from historic depletion events than ... the other components, whereas the Downs component has been the slowest. These differences give rise to changes in stock composition, which are shown to vary widely within a relatively short time. The modelling framework provides a valuable tool for studying and monitoring the dynamics of the individual ...

  4. A curved multi-component aerosol hygroscopicity model framework: Part 2 – Including organic compounds

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2005-01-01

    Full Text Available This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Kohler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5–6 dicarboxylic acids, down to low relative humidity conditions. By comparison with electrodynamic balance data, it was also found that the model was capable of capturing the behaviour of aqueous aerosols containing Suwannee River Fulvic acid, a structure previously used to represent the functionality of complex oxidised macromolecules often found in atmospheric aerosols. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between model predictions and measurements increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly
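
    The Köhler equation referred to above balances the curvature (Kelvin) term against the solute (Raoult) term to give the equilibrium saturation ratio over a solution droplet. The sketch below uses an ideal-solution water activity and a fixed surface tension, i.e. none of ADDEM's non-ideality treatment; the constants and solute amount are illustrative only.

```python
import numpy as np

R, Mw, rho_w = 8.314, 0.018, 1000.0   # J/(mol K), kg/mol, kg/m^3
sigma, T = 0.072, 293.15              # N/m (pure water), K

def kohler_saturation(D, n_solute):
    """Equilibrium saturation ratio over a droplet of diameter D (m)
    containing n_solute moles of fully dissociated solute (ideal Raoult term)."""
    n_water = rho_w * np.pi * D**3 / 6.0 / Mw     # moles of water in the droplet
    a_w = n_water / (n_water + n_solute)          # ideal-solution water activity
    kelvin = np.exp(4.0 * sigma * Mw / (R * T * rho_w * D))
    return a_w * kelvin

D = np.array([0.05, 0.1, 0.2, 0.5, 1.0]) * 1e-6   # droplet diameters (m)
print(kohler_saturation(D, n_solute=1e-17))
```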

  5. A curved multi-component aerosol hygroscopicity model framework: Part 2 Including organic compounds

    Science.gov (United States)

    Topping, D. O.; McFiggans, G. B.; Coe, H.

    2005-05-01

    This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM) framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Kohler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5-6 dicarboxylic acids, down to low relative humidity conditions. By comparison with electrodynamic balance data, it was also found that the model was capable of capturing the behaviour of aqueous aerosols containing Suwannee River Fulvic acid, a structure previously used to represent the functionality of complex oxidised macromolecules often found in atmospheric aerosols. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between model predictions and measurements increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly soluble in water

  6. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  7. Bee venom and its component apamin as neuroprotective agents in a Parkinson disease mouse model.

    Directory of Open Access Journals (Sweden)

    Daniel Alvarez-Fischer

    Full Text Available Bee venom has recently been suggested to possess beneficial effects in the treatment of Parkinson disease (PD). For instance, it has been observed that bilateral acupoint stimulation of lower hind limbs with bee venom was protective in the acute 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) mouse model of PD. In particular, a specific component of bee venom, apamin, has previously been shown to have protective effects on dopaminergic neurons in vitro. However, no information regarding a potential protective action of apamin in animal models of PD is available to date. The specific goals of the present study were to (i) establish that the protective effect of bee venom for dopaminergic neurons is not restricted to acupoint stimulation, but can also be observed using a more conventional mode of administration and to (ii) demonstrate that apamin can mimic the protective effects of a bee venom treatment on dopaminergic neurons. Using the chronic mouse model of MPTP/probenecid, we show that bee venom provides sustained protection in an animal model that mimics the chronic degenerative process of PD. Apamin, however, reproduced these protective effects only partially, suggesting that other components of bee venom enhance the protective action of the peptide.

  8. What Time Is Sunrise? Revisiting the Refraction Component of Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.; Hilton, James Lindsay

    2017-01-01

    Algorithms that predict sunrise and sunset times currently have an error of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, even including difficulties determining when the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compare these predictions with data sets of observed rise/set times to create a better model. Sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem. While there are a few data sets available, we will also begin collecting this data using smartphones as part of a citizen science project. The mobile application for this project will be available in the Google Play store. Data analysis will lead to more complete models that will provide more accurate rise/set times for the benefit of astronomers, navigators, and outdoorsmen everywhere.
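
    For context, the standard prediction treats refraction plus the solar semidiameter as a fixed 0.833 degree depression of the geometric horizon and solves for the hour angle at which the Sun reaches that apparent altitude. The sketch below implements only that standard geometric step, not the variable-refraction models discussed in the record.

```python
import numpy as np

def half_day_hour_angle(lat_deg, dec_deg, h0_deg=-0.833):
    """Hour angle (degrees) of apparent sunrise/sunset for solar declination dec_deg.

    h0_deg = -0.833 bundles ~34' of standard refraction with the 16' solar
    semidiameter; varying it shows the sensitivity of rise/set times to refraction.
    """
    lat, dec, h0 = np.radians([lat_deg, dec_deg, h0_deg])
    cos_h = (np.sin(h0) - np.sin(lat) * np.sin(dec)) / (np.cos(lat) * np.cos(dec))
    return np.degrees(np.arccos(np.clip(cos_h, -1.0, 1.0)))

# At 50 N near an equinox, ~10' of extra refraction shifts sunrise by about a minute
for h0 in (-0.833, -1.0):
    H = half_day_hour_angle(50.0, 0.0, h0)
    print(f"h0 = {h0:+.3f} deg -> half day length = {H / 15.0 * 60.0:.1f} min")
```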

  9. Two-component Jaffe models with a central black hole - I. The spherical case

    Science.gov (United States)

    Ciotti, Luca; Ziaee Lorzad, Azadeh

    2018-02-01

    Dynamical properties of spherically symmetric galaxy models where both the stellar and total mass density distributions are described by the Jaffe (1983) profile (with different scalelengths and masses) are presented. The orbital structure of the stellar component is described by Osipkov-Merritt anisotropy, and a black hole (BH) is added at the centre of the galaxy; the dark matter halo is isotropic. First, the conditions required to have a nowhere negative and monotonically decreasing dark matter halo density profile are derived. We then show that the phase-space distribution function can be recovered by using the Lambert-Euler W function, while in the absence of the central BH only elementary functions appear in the integrand of the inversion formula. The minimum value of the anisotropy radius for consistency is derived in terms of the galaxy parameters. The Jeans equations for the stellar component are solved analytically, and the projected velocity dispersion at the centre and at large radii is also obtained analytically for generic values of the anisotropy radius. Finally, the relevant global quantities entering the Virial Theorem are computed analytically, and the fiducial anisotropy limit required to prevent the onset of Radial Orbit Instability is determined as a function of the galaxy parameters. The presented models, even though highly idealized, represent a substantial generalization of the models presented in Ciotti, and can be useful as a starting point for more advanced modelling of the dynamics and the mass distribution of elliptical galaxies.
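
    For reference, the Jaffe (1983) profile on which both components of these models are built has the standard form (generic notation, not the paper's):

      \rho_{\rm J}(r) = \frac{M}{4\pi}\,\frac{r_{\rm J}}{r^{2}\,(r + r_{\rm J})^{2}},
      \qquad
      M_{\rm J}(r) = M\,\frac{r}{r + r_{\rm J}},

    where M is the total mass and r_J the scale length; in the two-component models the stellar and total densities each take this form with their own (M, r_J), and the dark matter density is the difference of the two, which is why conditions for a nowhere negative halo profile must be derived.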

  10. Bee venom and its component apamin as neuroprotective agents in a Parkinson disease mouse model.

    Science.gov (United States)

    Alvarez-Fischer, Daniel; Noelker, Carmen; Vulinović, Franca; Grünewald, Anne; Chevarin, Caroline; Klein, Christine; Oertel, Wolfgang H; Hirsch, Etienne C; Michel, Patrick P; Hartmann, Andreas

    2013-01-01

    Bee venom has recently been suggested to possess beneficial effects in the treatment of Parkinson disease (PD). For instance, it has been observed that bilateral acupoint stimulation of lower hind limbs with bee venom was protective in the acute 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) mouse model of PD. In particular, a specific component of bee venom, apamin, has previously been shown to have protective effects on dopaminergic neurons in vitro. However, no information regarding a potential protective action of apamin in animal models of PD is available to date. The specific goals of the present study were to (i) establish that the protective effect of bee venom for dopaminergic neurons is not restricted to acupoint stimulation, but can also be observed using a more conventional mode of administration and to (ii) demonstrate that apamin can mimic the protective effects of a bee venom treatment on dopaminergic neurons. Using the chronic mouse model of MPTP/probenecid, we show that bee venom provides sustained protection in an animal model that mimics the chronic degenerative process of PD. Apamin, however, reproduced these protective effects only partially, suggesting that other components of bee venom enhance the protective action of the peptide.

  11. A study of key features of random atmospheric disturbance models for the approach flight phase

    Science.gov (United States)

    Heffley, R. K.

    1977-01-01

    An analysis and brief simulator experiment were performed to identify and classify important features of random turbulence for the landing approach flight phase. The analysis of various wind models was carried out within the context of the longitudinal closed-loop pilot/vehicle system. The analysis demonstrated the relative importance of atmospheric disturbance scale lengths, horizontal versus vertical gust components, decreasing altitude, and spectral forms of disturbances versus the pilot/vehicle system. Among certain competing wind models, the analysis predicted no significant difference in pilot performance. This was confirmed by a moving base simulator experiment which evaluated the two most extreme models. A number of conclusions were reached: attitude constrained equations do provide a simple but effective approach to describing the closed-loop pilot/vehicle. At low altitudes the horizontal gust component dominates pilot/vehicle performance.
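
    The "spectral forms of disturbances" mentioned above are typically of the Dryden or von Karman type; as one illustrative example (not necessarily the form adopted in the report), the Dryden vertical-gust power spectral density is

      \Phi_{w}(\Omega) = \sigma_{w}^{2}\,\frac{L_{w}}{\pi}\,
      \frac{1 + 3\,(L_{w}\Omega)^{2}}{\left[\,1 + (L_{w}\Omega)^{2}\,\right]^{2}},

    where σ_w is the gust intensity, L_w the scale length and Ω the spatial frequency. The scale length L_w and the relative size of the horizontal and vertical intensities are exactly the features whose influence on the closed-loop pilot/vehicle system the study ranks.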

  12. Research on CO2 ejector component efficiencies by experiment measurement and distributed-parameter modeling

    International Nuclear Information System (INIS)

    Zheng, Lixing; Deng, Jianqiang

    2017-01-01

    Highlights: • The ejector distributed-parameter model is developed to study ejector efficiencies. • Feasible component and total efficiency correlations of the ejector are established. • New efficiency correlations are applied to obtain dynamic characteristics of the EERC. • A more suitable fixed efficiency value can be determined by the proposed correlations. - Abstract: In this study we combine experimental measurement data and a theoretical model of the ejector to determine CO2 ejector component efficiencies, including the motive nozzle, suction chamber, mixing section and diffuser efficiencies, as well as the total ejector efficiency. The ejector is modeled utilizing the distributed-parameter method: the flow passage is divided into a number of elements and the governing equations are formulated based on the differential equations of mass, momentum and energy conservation. The efficiencies of the ejector are investigated under different ejector geometric parameters and operational conditions, and the corresponding empirical correlations are established. Moreover, the correlations are incorporated into a transient model of the transcritical CO2 ejector expansion refrigeration cycle (EERC), and dynamic simulations are performed using either variable component efficiencies or fixed values. The motive nozzle, suction chamber, mixing section and diffuser efficiencies vary from 0.74 to 0.89, 0.86 to 0.96, 0.73 to 0.9 and 0.75 to 0.95 under the studied conditions, respectively. The responses of suction flow pressure and discharge pressure differ markedly between the variable efficiencies and fixed efficiencies taken from previous studies, whereas when the fixed values are determined by the presented correlations, the responses are essentially the same.
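
    The component efficiencies referred to above are conventionally defined against isentropic reference processes; a typical set of definitions (illustrative, not necessarily identical to the correlations developed in the paper) is

      \eta_{\rm nozzle} = \frac{h_{\rm in} - h_{\rm out}}{h_{\rm in} - h_{\rm out,s}},
      \qquad
      \eta_{\rm diffuser} = \frac{h_{\rm out,s} - h_{\rm in}}{h_{\rm out} - h_{\rm in}},

    i.e. the motive nozzle (and analogously the suction chamber) is penalized for converting less enthalpy into kinetic energy than an isentropic expansion to the same outlet pressure, while the diffuser is penalized for needing a larger enthalpy rise than the isentropic compression to the same outlet pressure. Such ratios can then be evaluated from the flow states computed at the inlet and outlet of each component of the distributed-parameter model.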

  13. Towards a three-component model of fan loyalty: a case study of Chinese youth.

    Directory of Open Access Journals (Sweden)

    Xiao-xiao Zhang

    Full Text Available The term "fan loyalty" refers to the loyalty felt and expressed by a fan towards the object of his/her fanaticism in both everyday and academic discourses. However, much of the literature on fan loyalty has paid little attention to the topic from the perspective of youth pop culture. The present study explored the meaning of fan loyalty in the context of China. Data were collected by the method of in-depth interviews with 16 young Chinese people aged between 19 and 25 years who currently or once were pop fans. The results indicated that fan loyalty entails three components: involvement, satisfaction, and affiliation. These three components regulate the process of fan loyalty development, which can be divided into four stages: inception, upgrade, zenith, and decline. This model provides a conceptual explanation of why and how young Chinese fans are loyal to their favorite stars. The implications of the findings are discussed.

  14. Model of components in a process of acoustic diagnosis correlated with learning

    International Nuclear Information System (INIS)

    Seballos, S.; Costabal, H.; Matamala, P.

    1992-06-01

    Using Linden's functional scheme as a theoretical reference framework, we define a matrix of components for clinical and field applications in the acoustic diagnostic process and its correlations with audiologic, learning and behavioral problems. It is expected that the model effectively contributes to classifying and providing greater knowledge about this multidisciplinary problem. Although the exact nature of this component is at present a matter to be defined, its correlation can be hypothetically established. By applying this descriptive and integral approach in the diagnostic process, it is possible, if not to avoid, at least to reduce the uncertainties and to assure proper solutions, making it a powerful tool applicable to environmental studies and/or social claims. (author). 8 refs, 2 figs

  15. High frequent modelling of a modular multilevel converter using passive components

    DEFF Research Database (Denmark)

    El-Khatib, Walid Ziad; Holbøll, Joachim; Rasmussen, Tonny Wederberg

    2013-01-01

    ). This means that a high frequency model of the converter has to be designed, which gives a better overview of the impact of high frequency transients etc. The functionality of the model is demonstrated by application to grid connections of off-shore wind power plants. Grid connection of an offshore wind power...... plant using HVDC fundamentally changes the electrical environment for the power plant. Detailed knowledge and understanding of the characteristics and behavior of all relevant power system components under all conditions, including under transients, are required in order to develop reliable offshore...... wind power plant employing HVDC. In the present study, a back to back HVDC transmission system is designed in PSCAD/EMTDC. Simulations and results showing the importance of high frequent modeling are presented....

  16. An Evaluation of Semiempirical Models for Partitioning Photosynthetically Active Radiation Into Diffuse and Direct Beam Components

    Science.gov (United States)

    Oliphant, Andrew J.; Stoy, Paul C.

    2018-03-01

    Photosynthesis is more efficient under diffuse than direct beam photosynthetically active radiation (PAR) per unit PAR, but diffuse PAR is infrequently measured at research sites. We examine four commonly used semiempirical models (Erbs et al., 1982, https://doi.org/10.1016/0038-092X(82)90302-4; Gu et al., 1999, https://doi.org/10.1029/1999JD901068; Roderick, 1999, https://doi.org/10.1016/S0168-1923(99)00028-3; Weiss & Norman, 1985, https://doi.org/10.1016/0168-1923(85)90020-6) that partition PAR into diffuse and direct beam components based on the negative relationship between atmospheric transparency and scattering of PAR. Radiation observations at 58 sites (140 site years) from the La Thuille FLUXNET data set were used for model validation and coefficient testing. All four models did a reasonable job of predicting the diffuse fraction of PAR (ϕ) at the 30 min timescale, with site median r2 values ranging between 0.85 and 0.87, model efficiency coefficients (MECs) between 0.62 and 0.69, and regression slopes within 10% of unity. Model residuals were not strongly correlated with astronomical or standard meteorological variables. We conclude that the Roderick (1999, https://doi.org/10.1016/S0168-1923(99)00028-3) and Gu et al. (1999, https://doi.org/10.1029/1999JD901068) models performed better overall than the two older models. Using the basic form of these models, the data set was used to find both individual site and universal model coefficients that optimized predictive accuracy. A new universal form of the model is presented in section 5 that increased site median MEC to 0.73. Site-specific model coefficients increased median MEC further to 0.78, indicating usefulness of local/regional training of coefficients to capture the local distributions of aerosols and cloud types.
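
    All four models parameterize the diffuse fraction by the clearness index. To make the form concrete, the sketch below implements an Erbs-type piecewise correlation (one of the four model families compared, originally derived for broadband solar radiation rather than PAR); the coefficients are the commonly cited ones and are shown for illustration, not as the optimized PAR coefficients of this study.

      def erbs_diffuse_fraction(kt):
          """Diffuse fraction as a function of clearness index kt = G / G_extraterrestrial.

          Piecewise correlation of Erbs et al. (1982); coefficients as commonly
          cited in the solar-resource literature (approximate, for illustration).
          """
          if kt <= 0.22:
              return 1.0 - 0.09 * kt
          if kt <= 0.80:
              return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                      - 16.638 * kt**3 + 12.336 * kt**4)
          return 0.165

      for kt in (0.1, 0.4, 0.7, 0.9):
          print(kt, round(erbs_diffuse_fraction(kt), 3))

    The site-specific and universal coefficient fitting described in the abstract amounts to re-estimating the constants in such a correlation against the observed 30 min diffuse fractions.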

  17. Accurate temperature model for absorptance determination of optical components with laser calorimetry.

    Science.gov (United States)

    Wang, Yanru; Li, Bincheng

    2011-03-20

    In the international standard (International Organization for Standardization 11551) for measuring the absorptance of optical components (i.e., laser calorimetry), the absorptance is obtained by fitting the temporal behavior of laser irradiation-induced temperature rise to a homogeneous temperature model in which the infinite thermal conductivity of the sample is assumed. In this paper, an accurate temperature model, in which both the finite thermal conductivity and size of the sample are taken into account, is developed to fit the experimental temperature data for a more precise determination of the absorptance. The difference and repeatability of the results fitted with the two theoretical models for the same experimental data are compared. The optimum detection position when the homogeneous model is employed in the data-fitting procedure is also analyzed with the accurate temperature model. The results show that the optimum detection location optimized for a wide thermal conductivity range of 0.2-50W/m·K moves toward the center of the sample as the sample thickness increases and deviates from the center as the radius and irradiation time increase. However, if the detection position is optimized for an individual sample with known sample size and thermal conductivity by applying the accurate temperature model, the influence of the finite thermal conductivity and sample size on the absorptance determination can be fully compensated for by fitting the temperature data recorded at the optimum detection position to the homogeneous temperature model.
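
    For orientation, the homogeneous (lumped) model used in the standard fit treats the sample as a single heat capacity with Newtonian heat loss, so the temperature rise during irradiation is a saturating exponential. The sketch below uses assumed values for the laser power P and heat capacity C and synthetic data; it only illustrates the kind of fit that yields the absorptance, which the paper's accurate finite-conductivity model then refines.

      import numpy as np
      from scipy.optimize import curve_fit

      P = 10.0   # laser power in W (assumed for the example)
      C = 5.0    # sample heat capacity in J/K (assumed)

      def heating_curve(t, absorptance, gamma):
          """Lumped temperature rise during irradiation: dT = a*P/(gamma*C)*(1 - exp(-gamma*t))."""
          return absorptance * P / (gamma * C) * (1.0 - np.exp(-gamma * t))

      # synthetic "measurement": 100 ppm absorptance, heat-loss coefficient 0.01 1/s, plus noise
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 120.0, 200)
      data = heating_curve(t, 1e-4, 0.01) + rng.normal(0.0, 2e-4, t.size)

      popt, _ = curve_fit(heating_curve, t, data, p0=[1e-4, 0.01])
      print("fitted absorptance:", popt[0])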

  18. A general maximum likelihood analysis of variance components in generalized linear models.

    Science.gov (United States)

    Aitkin, M

    1999-03-01

    This paper describes an EM algorithm for nonparametric maximum likelihood (ML) estimation in generalized linear models with variance component structure. The algorithm provides an alternative analysis to approximate MQL and PQL analyses (McGilchrist and Aisbett, 1991, Biometrical Journal 33, 131-141; Breslow and Clayton, 1993; Journal of the American Statistical Association 88, 9-25; McGilchrist, 1994, Journal of the Royal Statistical Society, Series B 56, 61-69; Goldstein, 1995, Multilevel Statistical Models) and to GEE analyses (Liang and Zeger, 1986, Biometrika 73, 13-22). The algorithm, first given by Hinde and Wood (1987, in Longitudinal Data Analysis, 110-126), is a generalization of that for random effect models for overdispersion in generalized linear models, described in Aitkin (1996, Statistics and Computing 6, 251-262). The algorithm is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but with only slight variation it can be used for a completely unknown mixing distribution, giving a straightforward method for the fully nonparametric ML estimation of this distribution. This is of value because the ML estimates of the GLM parameters can be sensitive to the specification of a parametric form for the mixing distribution. The nonparametric analysis can be extended straightforwardly to general random parameter models, with full NPML estimation of the joint distribution of the random parameters. This can produce substantial computational saving compared with full numerical integration over a specified parametric distribution for the random parameters. A simple method is described for obtaining correct standard errors for parameter estimates when using the EM algorithm. Several examples are discussed involving simple variance component and longitudinal models, and small-area estimation.
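
    In outline (generic notation, following the quadrature description above rather than reproducing the paper's formulas), for a single random intercept the marginal likelihood being maximized is

      L(\beta, \{z_k\}, \{\pi_k\}) = \prod_{i=1}^{n} \sum_{k=1}^{K} \pi_k
      \prod_{j=1}^{n_i} f\!\left(y_{ij} \mid \eta_{ij} = x_{ij}^{\top}\beta + z_k\right),

    where in the Gaussian-quadrature version the mass points z_k and masses π_k are fixed at the Gauss-Hermite nodes and weights of a normal mixing distribution, while in the NPML version both are estimated freely. The EM algorithm alternates between computing each unit's posterior weights over the mass points (E-step) and refitting a weighted GLM together with updated mass points and masses (M-step).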

  19. Combining random walk and regression models to understand solvation in multi-component solvent systems.

    Science.gov (United States)

    Gale, Ella M; Johns, Marcus A; Wirawan, Remigius H; Scott, Janet L

    2017-07-21

    Polysaccharides, such as cellulose, are often processed by dissolution in solvent mixtures, e.g. an ionic liquid (IL) combined with a dipolar aprotic co-solvent (CS) that the polymer does not dissolve in. A multi-walker, discrete-time, discrete-space 1-dimensional random walk can be applied to model solvation of a polymer in a multi-component solvent mixture. The number of IL pairs in a solvent mixture and the number of solvent shells formable, x, are associated with n, the model time-step, and N, the number of random walkers. The mean number of distinct sites visited is proportional to the amount of polymer soluble in a solution. By also fitting a polynomial regression model to the data, we can associate the random walk terms with chemical interactions between components and probe where the system deviates from a 1-D random walk. The 'frustration' between solvent shells is given as ln x in the random walk model and as a negative IL:IL interaction term in the regression model. This frustration appears in regime II of the random walk model (high volume fractions of IL) where walkers interfere with each other, and the system tends to its limiting behaviour. In the low-concentration regime (regime I), the solvent shells do not interact, and the system depends only on IL and CS terms. In both models (and both regimes), the system is almost entirely controlled by the volume available to solvation shells, and thus is a counting/space-filling problem, where the molar volume of the CS is important. Small deviations are observed when there is an IL-CS interaction. The use of two models, built on separate approaches, confirms these findings, demonstrating that this is a real effect and offering a route to identifying such systems. Specifically, the majority of CSs - such as dimethylformamide - follow the random walk model, whilst 1-methylimidazole, dimethyl sulfoxide, 1,3-dimethyl-2-imidazolidinone and tetramethylurea offer a CS-mediated improvement and propylene carbonate
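
    The driving quantity in the random walk model, the mean number of distinct lattice sites visited by N walkers in n steps, is straightforward to estimate by simulation. The sketch below is purely illustrative: it is not the authors' code, and the mapping of n and N onto IL content is omitted.

      import random

      def mean_distinct_sites(n_steps, n_walkers, n_trials=200, seed=1):
          """Monte Carlo estimate of the mean number of distinct 1-D lattice sites
          visited by n_walkers independent random walkers, each taking n_steps steps."""
          rng = random.Random(seed)
          total = 0
          for _ in range(n_trials):
              visited = set()
              for _ in range(n_walkers):
                  x = 0
                  visited.add(x)
                  for _ in range(n_steps):
                      x += rng.choice((-1, 1))
                      visited.add(x)
              total += len(visited)
          return total / n_trials

      # coverage grows quickly with the number of walkers at first and then saturates
      # as walkers revisit each other's sites, qualitatively like the two regimes above
      for walkers in (1, 2, 5, 10, 20):
          print(walkers, mean_distinct_sites(50, walkers))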

  20. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua; Alfonsi, Andrea; Askin Guler; Tunc Aldemir

    2014-11-01

    Passive systems, structures and components (SSCs) will degrade over their operational life, and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] considers physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics based models and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program) also currently under development at INL [3], as well as RELAP 5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner where sequencing of events is conditioned on the physical conditions predicted in a simulation

  1. A curved multi-component aerosol hygroscopicity model framework: 2 Including organics

    Science.gov (United States)

    Topping, D. O.; McFiggans, G. B.; Coe, H.

    2004-12-01

    This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM) framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described, two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Köhler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5-6 dicarboxylic acids, down to low relative humidity conditions. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between predicted and measured data increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly soluble in water, significant deviations from measured surface tension depression behaviour were predicted with both model formalisms tested. A sensitivity analysis showed that such variation is likely to lead to predicted growth factors within the measurement uncertainty for growth factors taken in the sub-saturated regime. Greater
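
    The Köhler equation referred to above balances the solution (Raoult) and curvature (Kelvin) effects; in its standard form (generic notation, not ADDEM's internal variables) it reads

      S = \frac{e}{e_{s}} = a_{w}\,
      \exp\!\left(\frac{4\,\sigma_{\rm sol}\,M_{w}}{R\,T\,\rho_{w}\,D}\right),

    where a_w is the water activity of the mixed inorganic/organic solution, σ_sol the solution surface tension, M_w and ρ_w the molar mass and density of water, T the temperature and D the droplet diameter. The UNIFAC non-ideality discussed above enters through a_w, and the surface tension models enter through σ_sol in the Kelvin term.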

  2. Designing a Component-Based Architecture for the Modeling and Simulation of Nuclear Fuels and Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Billings, Jay Jay [ORNL; Elwasif, Wael R [ORNL; Hively, Lee M [ORNL; Bernholdt, David E [ORNL; Hetrick III, John M [ORNL; Bohn, Tim T [ORNL

    2009-01-01

    Concerns over the environment and energy security have recently prompted renewed interest in the U.S. in nuclear energy. Recognizing this, the U.S. Dept. of Energy has launched an initiative to revamp and modernize the role that modeling and simulation plays in the development and operation of nuclear facilities. This Nuclear Energy Advanced Modeling and Simulation (NEAMS) program represents a major investment in the development of new software, with one or more large multi-scale multi-physics capabilities in each of four technical areas associated with the nuclear fuel cycle, as well as additional supporting developments. In conjunction with this, we are designing a software architecture, computational environment, and component framework to integrate the NEAMS technical capabilities and make them more accessible to users. In this report of work very much in progress, we lay out the 'problem' we are addressing, describe the model-driven system design approach we are using, and compare them with several large-scale technical software initiatives from the past. We discuss how component technology may be uniquely positioned to address the software integration challenges of the NEAMS program, outline the capabilities planned for the NEAMS computational environment and framework, and describe some initial prototyping activities.

  3. Examining the Efficiency of Models Using Tangent Coordinates or Principal Component Scores in Allometry Studies.

    Science.gov (United States)

    Sigirli, Deniz; Ercan, Ilker

    2015-09-01

    Most of the studies in medical and biological sciences are related to the examination of geometrical properties of an organ or organism. Growth and allometry studies are important for investigating the effects of diseases and environmental factors on the structure of the organ or organism. Thus, statistical shape analysis has recently become more important in the medical and biological sciences. Shape is all geometrical information that remains when location, scale and rotational effects are removed from an object. Allometry, which is a relationship between size and shape, plays an important role in the development of statistical shape analysis. The aim of the present study was to compare two different models for allometry, which include tangent coordinates or principal component scores of tangent coordinates as dependent variables in multivariate regression analysis. The results of the simulation study showed that the model constructed by taking tangent coordinates as dependent variables is more appropriate than the model constructed by taking principal component scores of tangent coordinates as dependent variables, for all sample sizes.

  4. A participatory systems approach to modeling social, economic, and ecological components of bioenergy

    International Nuclear Information System (INIS)

    Buchholz, Thomas S.; Volk, Timothy A.; Luzadis, Valerie A.

    2007-01-01

    Availability of and access to useful energy is a crucial factor for maintaining and improving human well-being. Looming scarcities and increasing awareness of environmental, economic, and social impacts of conventional sources of non-renewable energy have focused attention on renewable energy sources, including biomass. The complex interactions of social, economic, and ecological factors among the bioenergy system components of feedstock supply, conversion technology, and energy allocation have been a major obstacle to the broader development of bioenergy systems. For widespread implementation of bioenergy to occur there is a need for an integrated approach to model the social, economic, and ecological interactions associated with bioenergy. Such models can serve as a planning and evaluation tool to help decide when, where, and how bioenergy systems can contribute to development. One approach to integrated modeling is by assessing the sustainability of a bioenergy system. The evolving nature of sustainability can be described by an adaptive systems approach using general systems principles. Discussing these principles reveals that participation of stakeholders in all components of a bioenergy system is a crucial factor for sustainability. Multi-criteria analysis (MCA) is an effective tool to implement this approach. This approach would enable decision-makers to evaluate bioenergy systems for sustainability in a participatory, transparent, timely, and informed manner

  5. Simulated lumbar minimally invasive surgery educational model with didactic and technical components.

    Science.gov (United States)

    Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James

    2013-10-01

    The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need for maximizing efficiency and proficiency have encouraged advancements in simulator-based learning models. To confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Predidactic and postdidactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Average mean scores on the written didactic test improved from 78% to 100%. The technical component scores for fluoroscopic guidance improved from 58.8 to 52.9. Technical score for computed tomography-navigated guidance also improved from 28.3 to 26.6. Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and outcomes for patients.

  6. Modified EFG Components and Their Joint pdf for Use in Modeling ihb in PAC

    Science.gov (United States)

    Adams, M.; Matheson, P.; Park, T.; Stufflebeam, M.; Hodges, J.; Evenson, W. E.; Zacate, M. O.

    2012-10-01

    Spectra of hyperfine interactions involving the electric field gradient tensor (EFG) are subject to broadening by statistical variations of EFG components. In perturbed angular correlation (PAC) experiments, the inhomogeneous broadening (ihb) of the G2(c,t) spectrum is produced by randomly distributed lattice defects of concentration c. The EFG tensor has two independent components. The concentration dependence of ihb is determined by the joint probability distribution function (pdf) of these components. In typical PAC analyses, the independent coordinates are assumed to be Vzz and the asymmetry parameter η = (2Vxx+Vzz)/Vzz. However, the pdf P(c,Vzz,η) is not known, and in any case it is easy to show that Vzz and η are highly correlated, and not independent. We have found that the application of the Czjzek transformation [1], followed by a simple conformal mapping, produces two nearly independent EFG coordinates W1(c,Vzz,η) and W2(c,Vzz,η). The pdfs of each coordinate are readily characterized, and their product P(c,W1,W2)=P1(c,W1)P2(c,W2) forms an appropriate joint pdf that can be used to model ihb in a variety of situations. We show the application of this method by reporting results modeling the concentration dependence of ihb in various PAC models, for simple cubic (sc), face-centered cubic (fcc) and body-centered cubic (bcc) lattices. [1] Czjzek, G., Hyperfine Interactions 14 (1983) 189-194.
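
    For context, the isotropic random-defect limit that the Czjzek transformation is built around is usually quoted as the Czjzek distribution, which (up to normalization) has the form

      f(V_{zz}, \eta) \propto V_{zz}^{4}\,\eta \left(1 - \frac{\eta^{2}}{9}\right)
      \exp\!\left[-\,\frac{V_{zz}^{2}\left(1 + \eta^{2}/3\right)}{2\sigma^{2}}\right],

    and makes explicit that V_zz and η do not factorize, i.e. they are strongly correlated, which motivates the change to the nearly independent coordinates W1 and W2 described above.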

  7. The reduced kinome of Ostreococcus tauri: core eukaryotic signalling components in a tractable model species.

    Science.gov (United States)

    Hindle, Matthew M; Martin, Sarah F; Noordally, Zeenat B; van Ooijen, Gerben; Barrios-Llerena, Martin E; Simpson, T Ian; Le Bihan, Thierry; Millar, Andrew J

    2014-08-02

    The current knowledge of eukaryote signalling originates from phenotypically diverse organisms. There is a pressing need to identify conserved signalling components among eukaryotes, which will lead to the transfer of knowledge across kingdoms. Two useful properties of a eukaryote model for signalling are (1) reduced signalling complexity, and (2) conservation of signalling components. The alga Ostreococcus tauri is described as the smallest free-living eukaryote. With less than 8,000 genes, it represents a highly constrained genomic palette. Our survey revealed 133 protein kinases and 34 protein phosphatases (1.7% and 0.4% of the proteome). We conducted phosphoproteomic experiments and constructed domain structures and phylogenies for the catalytic protein-kinases. For each of the major kinases families we review the completeness and divergence of O. tauri representatives in comparison to the well-studied kinomes of the laboratory models Arabidopsis thaliana and Saccharomyces cerevisiae, and of Homo sapiens. Many kinase clades in O. tauri were reduced to a single member, in preference to the loss of family diversity, whereas TKL and ABC1 clades were expanded. We also identified kinases that have been lost in A. thaliana but retained in O. tauri. For three, contrasting eukaryotic pathways - TOR, MAPK, and the circadian clock - we established the subset of conserved components and demonstrate conserved sites of substrate phosphorylation and kinase motifs. We conclude that O. tauri satisfies our two central requirements. Several of its kinases are more closely related to H. sapiens orthologs than S. cerevisiae is to H. sapiens. The greatly reduced kinome of O. tauri is therefore a suitable model for signalling in free-living eukaryotes.

  8. Coaching younger practitioners and students using components of the co-active coaching model.

    Science.gov (United States)

    Tofade, Toyin

    2010-04-12

    Coaching is used to improve performance, achieve preset goals and obtain desired results. Several coaching models have been used in health professions for leadership and professional development. This article describes some components of Co-Active Coaching(R) that can be applied while coaching pharmacy students and younger practitioners. Co-Active Coaching requires the coach to use a broad range of communication skills, including listening, asking powerful questions, making insightful comments, offering encouragement, and giving sincere praise. The characteristics of the ideal candidate for coaching and the value of coaching are also discussed.

  9. Dynamic models of reduced order of main components of a MSR

    International Nuclear Information System (INIS)

    Garcia B, F. B.; Morales S, J. B.; Polo L, M. A.; Espinosa P, G.

    2011-11-01

    The reactors of melted salts, known as Molten Salt Fast Reactors (MSFR), have seen a resurgence of interest in the last decade. This design is one of the six proposed for the Generation IV reactors. The most active development took place between the mid-1950s and the early 1970s at the Oak Ridge National Laboratory (ORNL). In this work the mathematical modeling of the main components in the primary and secondary circuits of a MSR is presented. In particular, the dynamics of the heat exchanger is analyzed and several materials are considered to optimize the system thermodynamically. (Author)

  10. Colonization of components of a model hot water system by Legionella pneumophila.

    Science.gov (United States)

    Schofield, G M; Locci, R

    1985-02-01

    A model hot water distribution network was seeded with a virulent strain of Legionella pneumophila serotype 1. Ten weeks after inoculation, components of the system, which include aluminium discs, copper, stainless steel, silicone tubing, rubber and glass beads, were examined for colonization by L. pneumophila. The samples were stained with fluorescein-labelled antibodies to the strain and were examined with scanning electron microscopy. Colonization, which was accompanied by copious quantities of a slime-like debris, was heaviest on the rubber and least on the copper. Adherence to silicone tubing and stainless steel was observed.

  11. Kernel Principal Component Analysis and its Applications in Face Recognition and Active Shape Models

    OpenAIRE

    Wang, Quan

    2012-01-01

    Principal component analysis (PCA) is a popular tool for linear dimensionality reduction and feature extraction. Kernel PCA is the nonlinear form of PCA, which better exploits the complicated spatial structure of high-dimensional features. In this paper, we first review the basic ideas of PCA and kernel PCA. Then we focus on the reconstruction of pre-images for kernel PCA. We also give an introduction on how PCA is used in active shape models (ASMs), and discuss how kernel PCA can be applied ...
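
    A minimal usage sketch with scikit-learn, given only to make the reviewed idea concrete; the paper's own contributions on pre-image reconstruction and active shape models are not reproduced here.

      import numpy as np
      from sklearn.decomposition import PCA, KernelPCA

      rng = np.random.default_rng(0)
      # toy data with nonlinear structure: a noisy circle in 2-D
      theta = rng.uniform(0.0, 2.0 * np.pi, 300)
      X = np.column_stack([np.cos(theta), np.sin(theta)])
      X += 0.05 * rng.standard_normal(X.shape)

      linear_scores = PCA(n_components=2).fit_transform(X)
      kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                       fit_inverse_transform=True)   # inverse map allows approximate pre-images
      kernel_scores = kpca.fit_transform(X)

      print(linear_scores[:3])   # linear PCA only rotates the circle
      print(kernel_scores[:3])   # RBF kernel PCA separates the nonlinear structure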

  12. Correlation inequalities for two-component hypercubic φ⁴ models

    International Nuclear Information System (INIS)

    Soria, J.L.

    1988-01-01

    A collection of new and already known correlation inequalities is found for a family of two-component hypercubic φ⁴ models, using techniques of duplicated variables, rotated correlation inequalities, and random walk representation. Among the interesting new inequalities are: a rotated very special Dunlop-Newman inequality, ⟨φ_{1z}²; φ_{1z}² + φ_{2z}²⟩ ≥ 0; a rotated Griffiths I inequality, ⟨φ_{1z}² − φ_{2z}²⟩ ≥ 0; and an anti-Lebowitz inequality, u₄¹¹¹¹ ≥ 0.

  13. Implementation of intelligent nuclear material diagnosis module based on the component object model

    International Nuclear Information System (INIS)

    Lee, Sang Yoon; Song, Dae Yong; Ko, Won Il; Ha, Jang Ho; Kim, Ho Dong

    2003-08-01

    In this paper, the implementation techniques of an intelligent nuclear material surveillance system based on the COM (Component Object Model) and SOM (Self Organized Mapping) are described. The surveillance system to be developed consists of CCD cameras, neutron monitors, and a PC for data acquisition. To develop the system, the properties of COM-based software development technology were investigated, and the characteristics of the related platform APIs were summarized. This report could be used by developers who want to build an intelligent surveillance system for various experimental environments based on DVRs and sensors, using Borland C++ Builder

  14. Atmospheric Constituents in GEOS-5: Components for an Earth System Model

    Science.gov (United States)

    Pawson, Steven; Douglass, Anne; Duncan, Bryan; Nielsen, Eric; Ott, Leslie; Strode, Sarah

    2011-01-01

    The GEOS-5 model is being developed for weather and climate processes, including the implementation of "Earth System" components. While the stratospheric chemistry capabilities are mature, we are presently extending this to include predictions of the tropospheric composition and chemistry - this includes CO2, CH4, CO, nitrogen species, etc. (Aerosols are also implemented, but are beyond the scope of this paper.) This work will give an overview of our chemistry modules, the approaches taken to represent surface emissions and uptake of chemical species, and some studies of the sensitivity of the atmospheric circulation to changes in atmospheric composition. Results are obtained through focused experiments and multi-decadal simulations.

  15. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: raissaomarques@gmail.com, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and under workplace restrictions. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components. Among these can be highlighted Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction). The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, by acting on human factors issues. (author)

  16. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano

    2017-01-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and under workplace restrictions. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components. Among these can be highlighted Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction). The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, by acting on human factors issues. (author)

  17. THM modelling of buffer, backfill and other system components. Critical processes and scenarios

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Kristensson, Ola; Boergesson, Lennart; Dueck, Ann; Hernelind, Jan

    2010-03-01

    A number of critical thermo-hydro-mechanical processes and scenarios for the buffer, tunnel backfill and other filling components in the repository have been identified. These processes and scenarios representing different aspects of the repository evolution have been pinpointed and modelled. In total, 22 cases have been modelled. Most cases have been analysed with finite element (FE) calculations, using primarily the two codes Abaqus and Code-Bright. For some cases analytical methods have been used, either to supplement the FE calculations or because the scenario has a character that makes the FE method unsuitable or very difficult to use. Material models and element models and choice of parameters as well as presumptions have been stated for all modelling cases. In addition, the results have been analysed and conclusions drawn for each case. The uncertainties have also been analysed. Besides the information given for all cases studied, the codes and material models have been described in a separate so-called data report

  18. Practical applications of the multi-component marine photosynthesis model (MCM)

    Directory of Open Access Journals (Sweden)

    Dariusz Ficek

    2003-09-01

    Full Text Available This paper describes the applications and accuracy analyses of our multi-component model of marine photosynthesis, given in detail in Woźniak et al. (2003). We now describe an application of the model to determine quantities characterising the photosynthesis of marine algae, especially the quantum yield of photosynthesis and photosynthetic primary production. These calculations have permitted the analysis of the variability of these photosynthesis characteristics in a diversity of seas, at different seasons, and at different depths. Because of its structure, the model can be used as the "marine part" of a "satellite" algorithm for monitoring primary production in the sea (the set of input data necessary for the calculations can be determined with remote sensing methods). With this in mind, in the present work, we have tested and verified the model using empirical data. The verification yielded satisfactory results: for example, the statistical errors in estimates of primary production in the water column for Case 1 Waters do not exceed 45%. Hence, this model is far more accurate than earlier, less complex models hitherto applied in satellite algorithms.

  19. THM modelling of buffer, backfill and other system components. Critical processes and scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Kristensson, Ola; Boergesson, Lennart; Dueck, Ann (Clay Technology AB, Lund (Sweden)); Hernelind, Jan (5T-Engineering AB, Vaesteraas (Sweden))

    2010-03-15

    A number of critical thermo-hydro-mechanical processes and scenarios for the buffer, tunnel backfill and other filling components in the repository have been identified. These processes and scenarios representing different aspects of the repository evolution have been pinpointed and modelled. In total, 22 cases have been modelled. Most cases have been analysed with finite element (FE) calculations, using primarily the two codes Abaqus and Code-Bright. For some cases analytical methods have been used, either to supplement the FE calculations or because the scenario has a character that makes the FE method unsuitable or very difficult to use. Material models and element models and choice of parameters as well as presumptions have been stated for all modelling cases. In addition, the results have been analysed and conclusions drawn for each case. The uncertainties have also been analysed. Besides the information given for all cases studied, the codes and material models have been described in a separate so-called data report

  20. A finite element method based microwave heat transfer modeling of frozen multi-component foods

    Science.gov (United States)

    Pitchai, Krishnamoorthy

    Microwave heating is fast and convenient, but is highly non-uniform. Non-uniform heating in microwave cooking affects not only food quality but also food safety. Most food industries develop microwavable food products based on "cook-and-look" approach. This approach is time-consuming, labor intensive and expensive and may not result in optimal food product design that assures food safety and quality. Design of microwavable food can be realized through a simulation model which describes the physical mechanisms of microwave heating in mathematical expressions. The objective of this study was to develop a microwave heat transfer model to predict spatial and temporal profiles of various heterogeneous foods such as multi-component meal (chicken nuggets and mashed potato), multi-component and multi-layered meal (lasagna), and multi-layered food with active packages (pizza) during microwave heating. A microwave heat transfer model was developed by solving electromagnetic and heat transfer equations using finite element method in commercially available COMSOL Multiphysics v4.4 software. The microwave heat transfer model included detailed geometry of the cavity, phase change, and rotation of the food on the turntable. The predicted spatial surface temperature patterns and temporal profiles were validated against the experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The predicted spatial surface temperature profile of different multi-component foods was in good agreement with the corresponding experimental profiles in terms of hot and cold spot patterns. The root mean square error values of temporal profiles ranged from 5.8 °C to 26.2 °C in chicken nuggets as compared 4.3 °C to 4.7 °C in mashed potatoes. In frozen lasagna, root mean square error values at six locations ranged from 6.6 °C to 20.0 °C for 6 min of heating. A microwave heat transfer model was developed to include susceptor assisted microwave heating of a

  1. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    Science.gov (United States)

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
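
    In generic notation (a sketch of the class of model described, not the authors' exact parameterization), the variance-partitioning structure is

      y_{itr} \sim \mathrm{NegBin}(\mu_{itr},\, k), \qquad
      \log \mu_{itr} = \beta_{0} + s_{i} + a_{t} + w_{it},
      \qquad
      s_{i} \sim N(0, \sigma^{2}_{\rm site}), \;
      a_{t} \sim N(0, \sigma^{2}_{\rm year}), \;
      w_{it} \sim N(0, \sigma^{2}_{\rm site \times year}),

    for replicate gill-net catch r at site i in year t, with k the negative binomial scaling (overdispersion) parameter; the estimated σ² terms are the spatial, temporal and interaction variance components whose relative sizes are compared across the four surveys.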

  2. A multi-component model of the developing retinocollicular pathway incorporating axonal and synaptic growth.

    Directory of Open Access Journals (Sweden)

    Keith B Godfrey

    2009-12-01

    Full Text Available During development, neurons extend axons to different brain areas and produce stereotypical patterns of connections. The mechanisms underlying this process have been intensively studied in the visual system, where retinal neurons form retinotopic maps in the thalamus and superior colliculus. The mechanisms active in map formation include molecular guidance cues, trophic factor release, spontaneous neural activity, spike-timing dependent plasticity (STDP), synapse creation and retraction, and axon growth, branching and retraction. To investigate how these mechanisms interact, a multi-component model of the developing retinocollicular pathway was produced based on phenomenological approximations of each of these mechanisms. Core assumptions of the model were that the probabilities of axonal branching and synaptic growth are highest where the combined influences of chemoaffinity and trophic factor cues are highest, and that activity-dependent release of trophic factors acts to stabilize synapses. Based on these behaviors, model axons produced morphologically realistic growth patterns and projected to retinotopically correct locations in the colliculus. Findings of the model include that STDP, gradient detection by axonal growth cones and lateral connectivity among collicular neurons were not necessary for refinement, and that the instructive cues for axonal growth appear to be mediated first by molecular guidance and then by neural activity. Although complex, the model appears to be insensitive to variations in how the component developmental mechanisms are implemented. Activity, molecular guidance and the growth and retraction of axons and synapses are common features of neural development, and the findings of this study may have relevance beyond organization in the retinocollicular pathway.

  3. Modeling the variability of solar radiation data among weather stations by means of principal components analysis

    International Nuclear Information System (INIS)

    Zarzo, Manuel; Marti, Pau

    2011-01-01

    Research highlights: → Principal components analysis was applied to Rs data recorded at 30 stations. → Four principal components explain 97% of the data variability. → The latent variables can be fitted according to latitude, longitude and altitude. → The PCA approach is more effective for gap infilling than conventional approaches. → The proposed method allows daily Rs estimations at locations in the area of study. - Abstract: Measurements of global terrestrial solar radiation (Rs) are commonly recorded in meteorological stations. Daily variability of Rs has to be taken into account for the design of photovoltaic systems and energy efficient buildings. Principal components analysis (PCA) was applied to Rs data recorded at 30 stations in the Mediterranean coast of Spain. Due to equipment failures and site operation problems, time series of Rs often present data gaps or discontinuities. The PCA approach copes with this problem and allows estimation of present and past values by taking advantage of Rs records from nearby stations. The gap infilling performance of this methodology is compared with neural networks and alternative conventional approaches. Four principal components explain 66% of the data variability with respect to the average trajectory (97% if non-centered values are considered). A new method based on principal components regression was also developed for Rs estimation if previous measurements are not available. By means of multiple linear regression, it was found that the latent variables associated to the four relevant principal components can be fitted according to the latitude, longitude and altitude of the station where data were recorded from. Additional geographical or climatic variables did not increase the predictive goodness-of-fit. The resulting models allow the estimation of daily Rs values at any location in the area under study and present higher accuracy than artificial neural networks and some conventional approaches
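
    A minimal sketch of PCA-based gap infilling for a day-by-station Rs matrix; this is illustrative only, since the file name, the choice of four components and the simple regression step are assumptions rather than the paper's exact procedure, and the donor stations are assumed to be gap-free over the period.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      # rows = days, columns = stations; gaps coded as NaN (hypothetical file and layout)
      rs = np.genfromtxt("daily_rs_matrix.csv", delimiter=",")
      target_col = 7                       # station whose gaps are to be filled (assumed index)
      known = ~np.isnan(rs[:, target_col])

      # leading-component scores computed from the remaining (assumed complete) stations
      others = np.delete(rs, target_col, axis=1)
      scores = PCA(n_components=4).fit_transform(others)   # four PCs, as in the abstract

      # principal components regression: train on days with data, predict the gaps
      pcr = LinearRegression().fit(scores[known], rs[known, target_col])
      rs_filled = rs.copy()
      rs_filled[~known, target_col] = pcr.predict(scores[~known])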

  4. Precision modelling of M dwarf stars: the magnetic components of CM Draconis

    Science.gov (United States)

    MacDonald, J.; Mullan, D. J.

    2012-04-01

    The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models with appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we attempt at first to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundances (Y), heavy element abundances (Z), opacities and mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can inclusion of magnetic effects into stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify. An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ≈B2/4

  5. A microengineered vascularized bleeding model that integrates the principal components of hemostasis.

    Science.gov (United States)

    Sakurai, Yumiko; Hardy, Elaissa T; Ahn, Byungwook; Tran, Reginald; Fay, Meredith E; Ciciliano, Jordan C; Mannino, Robert G; Myers, David R; Qiu, Yongzhi; Carden, Marcus A; Baldwin, W Hunter; Meeks, Shannon L; Gilbert, Gary E; Jobe, Shawn M; Lam, Wilbur A

    2018-02-06

    Hemostasis encompasses an ensemble of interactions among platelets, coagulation factors, blood cells, endothelium, and hemodynamic forces, but current assays assess only isolated aspects of this complex process. Accordingly, here we develop a comprehensive in vitro mechanical injury bleeding model comprising an "endothelialized" microfluidic system coupled with a microengineered pneumatic valve that induces a vascular "injury". With perfusion of whole blood, hemostatic plug formation is visualized and "in vitro bleeding time" is measured. We investigate the interaction of different components of hemostasis, gaining insight into several unresolved hematologic issues. Specifically, we visualize and quantitatively demonstrate: the effect of anti-platelet agent on clot contraction and hemostatic plug formation, that von Willebrand factor is essential for hemostasis at high shear, that hemophilia A blood confers unstable hemostatic plug formation and altered fibrin architecture, and the importance of endothelial phosphatidylserine in hemostasis. These results establish the versatility and clinical utility of our microfluidic bleeding model.

  6. Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System

    Science.gov (United States)

    Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao

    2017-12-01

    This paper studies a fast channel modeling method to solve the problem of reflection channel gain for multiple input multiple output visible light communications (MIMO-VLC). Because the computational complexity grows with the number of reflections, no more than 3 reflections are taken into consideration in VLC. Treating a higher-order reflection link as a composition of multiple line-of-sight links, the paper first introduces a reflection residual component to characterize higher-order reflections (more than 2 reflections). Computer simulation results are presented for the point-to-point channel impulse response, received optical power and received signal-to-noise ratio. Based on theoretical analysis and simulation results, the proposed method can effectively reduce the computational complexity of higher-order reflection in channel modeling.
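    For reference, the Lambertian line-of-sight gain that indoor VLC channel models of this kind are typically built from can be computed as below (textbook formula; the geometry values are invented, and the snippet does not reproduce the paper's reflection residual component).

        import numpy as np

        def los_gain(semi_angle_deg, area_m2, d, phi, psi, fov_deg):
            """Lambertian line-of-sight DC channel gain between one LED and one photodiode.
            semi_angle_deg: LED half-power semi-angle; phi: irradiance angle; psi: incidence angle."""
            if psi > np.radians(fov_deg):
                return 0.0                                 # outside the receiver field of view
            m = -np.log(2) / np.log(np.cos(np.radians(semi_angle_deg)))   # Lambertian order
            return (m + 1) * area_m2 / (2 * np.pi * d**2) * np.cos(phi)**m * np.cos(psi)

        # Example: LED 2.5 m from a desk-mounted detector (illustrative numbers)
        h = los_gain(semi_angle_deg=60, area_m2=1e-4, d=2.5,
                     phi=np.radians(15), psi=np.radians(15), fov_deg=70)
        print(f"LOS DC gain ~ {h:.2e}")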

  7. A three-dimensional cellular automaton model for dendritic growth in multi-component alloys

    International Nuclear Information System (INIS)

    Zhang Xianfei; Zhao, Jiuzhou; Jiang, Hongxiang; Zhu, Mingfang

    2012-01-01

    A three-dimensional (3-D) cellular automaton model for dendritic growth in multi-component alloys is developed. The velocity of advance of the solid/liquid (S/L) interface is calculated using the solute conservation relationship at the S/L interface. The effect of interactions between the alloying elements on the diffusion coefficient of solutes in the solid and liquid phases is considered. The model is first validated by comparing with the theoretical predictions for binary and ternary alloys, and then applied to simulate the solidification process of Al–Cu–Mg alloys by a coupling of thermodynamic and kinetic calculations. The numerical results obtained show both the free dendrite growth process and the directional solidification process. The calculated secondary dendrite arm spacing in the directionally solidified Al–Cu–Mg alloy is in good agreement with the experimental results. The effect of interactions between the various alloying elements on dendritic growth is discussed.

  8. Combustion engine diagnosis model-based condition monitoring of gasoline and diesel engines and their components

    CERN Document Server

    Isermann, Rolf

    2017-01-01

    This book first offers a short introduction to advanced supervision, fault detection and diagnosis methods. It then describes model-based methods of fault detection and diagnosis for the main components of gasoline and diesel engines, such as the intake system, fuel supply, fuel injection, combustion process, turbocharger, exhaust system and exhaust gas aftertreatment. Additionally, model-based fault diagnosis of electrical motors, electric, pneumatic and hydraulic actuators and fault-tolerant systems is treated. In general, series-production sensors are used. It includes abundant experimental results showing the detection and diagnosis quality of implemented faults. Written for automotive engineers in practice, it is also of interest to graduate students of mechanical and electrical engineering and computer science. The Content Introduction.- I SUPERVISION, FAULT DETECTION AND DIAGNOSIS METHODS.- Supervision, Fault-Detection and Fault-Diagnosis Methods - a short Introduction.- II DIAGNOSIS OF INTERNAL COMBUST...

  9. Detailed measurements and modelling of thermo active components using a room size test facility

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    measurements in an office sized test facility with thermo active ceiling and floor as well as modelling of similar conditions in a computer program designed for analysis of building integrated heating and cooling systems. A method for characterizing the cooling capacity of thermo active components is described...... based on measurements of the energy balance of the thermo active deck. A cooling capacity of around 60W/m² at a temperature difference of 10K between room and fluid temperature has been found. It is also shown, that installing a lowered acoustic ceiling covering around 50% of the ceiling surface area...... only causes a reduction in the cooling capacity of around 10%. At the same time, the simulation model is able to reproduce the results from the measurements. Especially the heat flows are well predicted with a deviation of only a few percent, while the temperatures are not as well predicted, though...

  10. Modeling Earth's surface topography: decomposition of the static and dynamic components

    Science.gov (United States)

    Guerri, M.; Cammarano, F.; Tackley, P. J.

    2017-12-01

    Isolating the portion of topography supported by mantle convection, the so-called dynamic topography, would give us valuable information on the vigor and style of the convection itself. Contrasting results on the estimate of dynamic topography motivate us to analyse the sources of uncertainties affecting its modeling. We obtain models of mantle and crust density, leveraging on seismic and mineral physics constraints. We use the models to compute isostatic topography and residual topography maps. Estimates of dynamic topography and associated synthetic geoid are obtained by instantaneous mantle flow modeling. We test various viscosity profiles and 3D viscosity distributions accounting for inferred lateral variations in temperature. We find that the patterns of residual and dynamic topography are robust, with average correlation coefficients of 0.74 and 0.71, respectively. The amplitudes are however poorly constrained. For the static component, the considered lithospheric mantle density models result in topographies that differ, on average, by 720 m, with peaks reaching 1.7 km. The crustal density models produce variations in isostatic topography averaging 350 m, with peaks of 1 km. For the dynamic component, we obtain peak-to-peak topography amplitude exceeding 3 km for all the tested mantle density and viscosity models. Such values of dynamic topography produce geoid undulations that are not in agreement with observations. Assuming chemical heterogeneities in the lower mantle, in correspondence with the LLSVPs (Large Low Shear wave Velocity Provinces), helps to decrease the amplitudes of dynamic topography and geoid, but reduces the correlation between synthetic and observed geoid. The correlation coefficients between the residual and dynamic topography maps are always less than 0.55. In general, our results indicate that, i) current knowledge of crust density, mantle density and mantle viscosity is still limited, ii) it is important to account for all the various

  11. Modelling the mid-Pliocene Warm Period climate with the IPSL coupled model and its atmospheric component LMDZ5A

    Directory of Open Access Journals (Sweden)

    C. Contoux

    2012-06-01

    Full Text Available This paper describes the experimental design and model results of the climate simulations of the mid-Pliocene Warm Period (mPWP, ca. 3.3–3 Ma) using the Institut Pierre Simon Laplace model (IPSLCM5A), in the framework of the Pliocene Model Intercomparison Project (PlioMIP). We use the IPSL atmosphere ocean general circulation model (AOGCM), and its atmospheric component alone (AGCM), to simulate the climate of the mPWP. Boundary conditions such as sea surface temperatures (SSTs), topography, ice-sheet extent and vegetation are derived from the ones imposed by the Pliocene Model Intercomparison Project (PlioMIP), described in Haywood et al. (2010, 2011). We first describe the IPSL model main features, and then give a full description of the boundary conditions used for atmospheric model and coupled model experiments. The climatic outputs of the mPWP simulations are detailed and compared to the corresponding control simulations. The simulated warming relative to the control simulation is 1.94 °C in the atmospheric and 2.07 °C in the coupled model experiments. In both experiments, warming is larger at high latitudes. Mechanisms governing the simulated precipitation patterns are different in the coupled model than in the atmospheric model alone, because of the reduced gradients in imposed SSTs, which impacts the Hadley and Walker circulations. In addition, a sensitivity test to the change of land-sea mask in the atmospheric model, representing a sea-level change from present-day to 25 m higher during the mid-Pliocene, is described. We find that surface temperature differences can be large (several degrees Celsius) but are restricted to the areas that were changed from ocean to land or vice versa. In terms of precipitation, impact on polar regions is minor although the change in land-sea mask is significant in these areas.

  12. Real time damage detection using recursive principal components and time varying auto-regressive modeling

    Science.gov (United States)

    Krishnan, M.; Bhowmik, B.; Hazra, B.; Pakrashi, V.

    2018-02-01

    In this paper, a novel baseline free approach for continuous online damage detection of multi degree of freedom vibrating structures using Recursive Principal Component Analysis (RPCA) in conjunction with Time Varying Auto-Regressive Modeling (TVAR) is proposed. In this method, the acceleration data is used to obtain recursive proper orthogonal components online using the rank-one perturbation method, followed by TVAR modeling of the first transformed response, to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/non-linear states that indicate damage. Most of the works available in the literature deal with algorithms that require windowing of the gathered data owing to their data-driven nature, which renders them ineffective for online implementation. Algorithms focused on mathematically consistent recursive techniques in a rigorous theoretical framework of structural damage detection are missing, which motivates the development of the present framework, which is amenable to online implementation and can be utilized along with a suite of experimental and numerical investigations. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for the sample covariance matrix and the new data point at each successive time instant, using the rank-one perturbation method. TVAR modeling on the principal component explaining maximum variance is utilized and the damage is identified by tracking the TVAR coefficients. This eliminates the need for offline post processing and facilitates online damage detection especially when applied to streaming data without requiring any baseline data. Numerical simulations performed on a 5-dof nonlinear system under white noise excitation and El Centro (also known as the 1940 Imperial Valley earthquake) excitation, for different damage scenarios, demonstrate the robustness of the proposed algorithm. The method is further validated on results obtained from case studies involving
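    A simplified sketch of the pipeline outlined above (Python, synthetic accelerations): a forgetting-factor covariance update stands in for the paper's rank-one perturbation RPCA, and a recursive-least-squares fit tracks the TVAR(2) coefficients of the first principal response, whose drift would indicate damage.

        import numpy as np

        rng = np.random.default_rng(1)
        T, n = 2000, 5
        acc = rng.standard_normal((T, n))
        acc[1000:, 0] *= 2.0                       # crude "damage": variance change in one DOF

        lam = 0.98                                 # forgetting factor
        C = np.eye(n)                              # running covariance estimate
        p = 2                                      # TVAR order
        theta = np.zeros(p)                        # TVAR coefficients
        P = np.eye(p) * 1e3                        # RLS covariance
        pc1_hist, coef_hist = [], []

        for t in range(T):
            x = acc[t]
            C = lam * C + (1 - lam) * np.outer(x, x)      # recursive covariance (in lieu of rank-one PCA)
            w = np.linalg.eigh(C)[1][:, -1]               # leading eigenvector
            pc1 = float(w @ x)                            # first principal response
            pc1_hist.append(pc1)
            if t >= p:                                    # RLS update of the TVAR(p) coefficients
                phi = np.array(pc1_hist[-p - 1:-1][::-1])
                k = P @ phi / (lam + phi @ P @ phi)
                theta = theta + k * (pc1 - phi @ theta)
                P = (P - np.outer(k, phi @ P)) / lam
                coef_hist.append(theta.copy())            # damage flagged when these coefficients drift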

  13. Application of the Model of Principal Components Analysis on Romanian Insurance Market

    Directory of Open Access Journals (Sweden)

    Dan Armeanu

    2008-06-01

    Full Text Available Principal components analysis (PCA) is a multivariate data analysis technique whose main purpose is to reduce the dimension of the observations and thus simplify the analysis and interpretation of data, as well as facilitate the construction of predictive models. A rigorous definition of PCA has been given by Bishop (1995), and it states that PCA is a linear dimensionality reduction technique, which identifies orthogonal directions of maximum variance in the original data, and projects the data into a lower-dimensionality space formed of a sub-set of the highest-variance components. PCA is commonly used in economic research, as well as in other fields of activity. When faced with the complexity of economic and financial processes, researchers have to analyze a large number of variables (or indicators), a fact which often proves to be troublesome because it is difficult to collect such a large amount of data and perform calculations on it. In addition, there is a good chance that the initial data is strongly correlated; therefore, the significance of variables is seriously diminished and it is virtually impossible to establish causal relationships between variables. Researchers thus require a simple, yet powerful analytical tool to solve these problems and perform a coherent and conclusive analysis. This tool is PCA. The essence of PCA consists of transforming the space of the initial data into another space of lower dimension while maximising the quantity of information recovered from the initial space (1). Mathematically speaking, PCA is a method of determining a new space (called principal component space or factor space) onto which the original space of variables can be projected. The axes of the new space (called factor axes) are defined by the principal components determined as a result of PCA. Principal components (PC) are standardized linear combinations (SLC) of the original variables and are uncorrelated. Theoretically, the number of PCs equals

  14. Three-dimensional model for multi-component reactive transport with variable density groundwater flow

    Science.gov (United States)

    Mao, X.; Prommer, H.; Barry, D.A.; Langevin, C.D.; Panteleit, B.; Li, L.

    2006-01-01

    PHWAT is a new model that couples a geochemical reaction model (PHREEQC-2) with a density-dependent groundwater flow and solute transport model (SEAWAT) using the split-operator approach. PHWAT was developed to simulate multi-component reactive transport in variable density groundwater flow. Fluid density in PHWAT depends not only on the concentration of a single species, as in SEAWAT, but also on the concentrations of other dissolved chemicals that can be subject to reactive processes. Simulation results of PHWAT and PHREEQC-2 were compared in their predictions of effluent concentration from a column experiment. Both models produced identical results, showing that PHWAT has correctly coupled the sub-packages. PHWAT was then applied to the simulation of a tank experiment in which seawater intrusion was accompanied by cation exchange. The density dependence of the intrusion and the snow-plough effect in the breakthrough curves were reflected in the model simulations, which were in good agreement with the measured breakthrough data. Comparison simulations that, in turn, excluded density effects and reactions allowed us to quantify the marked effect of ignoring these processes. Next, we explored numerical issues involved in the practical application of PHWAT using the example of a dense plume flowing into a tank containing fresh water. It was shown that PHWAT could model physically unstable flow and that numerical instabilities were suppressed. Physical instability developed in the model in accordance with the increase of the modified Rayleigh number for density-dependent flow, in agreement with previous research. © 2004 Elsevier Ltd. All rights reserved.
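    The split-operator coupling described above alternates a transport step and a chemistry step within each time step; a toy one-dimensional advection-plus-first-order-reaction loop showing only the coupling pattern (not PHREEQC-2 or SEAWAT calls) is sketched below. In PHWAT the reaction step would additionally update the fluid density that feeds back into the flow solution.

        import numpy as np

        nx, nt = 100, 200
        dx, dt, v = 1.0, 0.5, 1.0        # grid spacing, time step, pore velocity
        k_rxn = 0.02                     # first-order "reaction" rate standing in for the geochemistry
        c = np.zeros(nx)

        for _ in range(nt):
            # --- transport operator (upwind advection, the flow/transport step) ---
            c_in = 1.0                                   # constant-concentration inflow boundary
            c[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1])
            c[0] = c_in
            # --- reaction operator (the chemistry step), applied to the transported field ---
            c *= np.exp(-k_rxn * dt)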

  15. A three-component analytic model of long-term climate change

    Science.gov (United States)

    Pratt, V. R.

    2011-12-01

    On the premise that fast climate fluctuations up to and including the 11-year solar cycle play a negligible role in long-term climate forecasting, we remove these from the 160-year HADCRUT3 global land-sea temperature record and model the result as the sum of a log-raised-exponential (log(b+exp(t))) and two sine waves of respective periods 56 and 75 years coinciding in phase in 1925. The latter two can be understood equivalently as a 62-year-period "carrier" modulated with a 440-year period that peaked in 1925 and vanished in 1705. This model gives an excellent fit, explaining 98% of the variance (r^2) of long-term climate over the 160 years. We derive the first component as the composition of Arrhenius's 1896 logarithmic dependence of surface temperature on CO2 with Hofmann's 2009 raised-exponential dependence of CO2 on time, but interpret its fit to the data as the net anthropogenic contribution incorporating all greenhouse and aerosol emissions and relevant feedbacks, bearing in mind the rapid growth in both population and technology. The 56-year oscillation matches the largest component of the Atlantic Multidecadal Oscillation, while the 75-year one is near an oscillation often judged to be in the vicinity of 70 years. The expected 1705 cancellation is about two decades earlier than suggested by Gray et al's tree-ring proxy for the AMO during 1567-1990 [Gray GPL 31, L12205]. While there is no consensus on the origin of ocean oscillations, the oscillations in geomagnetic secular variation noted by Nagata and Rimitake in 1963 and Slaucitajs and Winch in 1965, of respective periods 77 years and 61 years, correspond strikingly with our 76-year oscillation and 62-year "carrier." This model has a number of benefits. Simplicity. It is easily explained to a lay audience in response to the frequently voiced concern that the temperature record is poorly correlated with the CO2 record alone. It shows that the transition from natural to anthropogenic influences on long
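    The functional form stated in the abstract (a log-raised-exponential plus 56- and 75-year sinusoids phased to coincide in 1925) can be written down directly; the parameter values below are purely illustrative and do not reproduce the paper's fit.

        import numpy as np

        def three_component(t, a, b, tau, t0, A1, A2, c0):
            """Log-raised-exponential trend plus two sinusoids (56 yr and 75 yr) in phase at 1925."""
            trend = a * np.log(b + np.exp((t - t0) / tau))
            osc = (A1 * np.sin(2 * np.pi * (t - 1925) / 56.0)
                   + A2 * np.sin(2 * np.pi * (t - 1925) / 75.0))
            return c0 + trend + osc

        years = np.arange(1850, 2011, dtype=float)
        # Purely illustrative parameter values; the paper estimates these from HADCRUT3.
        anomaly = three_component(years, a=0.35, b=1.0, tau=45.0, t0=1995.0,
                                  A1=0.08, A2=0.08, c0=-0.4)
        # In practice the parameters would be estimated, e.g. with scipy.optimize.curve_fit,
        # after removing fluctuations shorter than the 11-year solar cycle.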

  16. Simulated spinal cerebrospinal fluid leak repair: an educational model with didactic and technical components.

    Science.gov (United States)

    Ghobrial, George M; Anderson, Paul A; Chitale, Rohan; Campbell, Peter G; Lobel, Darlene A; Harrop, James

    2013-10-01

    In the era of surgical resident work hour restrictions, the traditional apprenticeship model may provide fewer hours for neurosurgical residents to hone technical skills. Spinal dura mater closure or repair is 1 skill that is infrequently encountered, and persistent cerebrospinal fluid leaks are a potential morbidity. To establish an educational curriculum to train residents in spinal dura mater closure with a novel durotomy repair model. The Congress of Neurological Surgeons has developed a simulation-based model for durotomy closure with the ongoing efforts of their simulation educational committee. The core curriculum consists of didactic training materials and a technical simulation model of dural repair for the lumbar spine. Didactic pretest scores ranged from 4/11 (36%) to 10/11 (91%). Posttest scores ranged from 8/11 (73%) to 11/11 (100%). Overall, didactic improvements were demonstrated by all participants, with a mean improvement between pre- and posttest scores of 1.17 (18.5%; P = .02). The technical component consisted of 11 durotomy closures by 6 participants, where 4 participants performed multiple durotomies. Mean time to closure of the durotomy ranged from 490 to 546 seconds in the first and second closures, respectively (P = .66), whereby the median leak rate improved from 14 to 7 (P = .34). There were also demonstrative technical improvements by all. Simulated spinal dura mater repair appears to be a potentially valuable tool in the education of neurosurgery residents. The combination of a didactic and technical assessment appears to be synergistic in terms of educational development.

  17. Prediction of peanut protein solubility based on the evaluation model established by supervised principal component regression.

    Science.gov (United States)

    Wang, Li; Liu, Hongzhi; Liu, Li; Wang, Qiang; Li, Shurong; Li, Qizhai

    2017-03-01

    Supervised principal component regression (SPCR) analysis was adopted to establish the evaluation model of peanut protein solubility. Sixty-six peanut varieties were analysed in the present study. Results showed there was a close correlation between protein solubility and other indexes. At the 0.05 level, these 11 indexes, namely crude fat, crude protein, total sugar, cystine, arginine, conarachin I, 37.5 kDa, 23.5 kDa, 15.5 kDa, protein extraction rate, and kernel ratio, were correlated with protein solubility and were extracted for establishing the SPCR model. At the 0.01 level, a simpler model was built between the four indexes (crude protein, cystine, conarachin I, and 15.5 kDa) and protein solubility. Verification results showed that the coefficients between theoretical and experimental values were 0.815, indicating that the models could predict protein solubility effectively. The application of the models was more convenient and efficient than the traditional determination method. Copyright © 2016 Elsevier Ltd. All rights reserved.
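    A generic supervised principal component regression sketch in the spirit of the abstract (synthetic data; the correlation screen, component count and coefficients are placeholders): predictors are first screened by their correlation with the response, PCA is run on the retained indexes, and the response is regressed on the leading scores.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.standard_normal((66, 11))                 # 66 varieties x 11 candidate indexes
        y = X[:, :4] @ np.array([0.6, -0.4, 0.3, 0.5]) + 0.3 * rng.standard_normal(66)

        # 1. Supervision step: keep indexes strongly correlated with the response
        r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
        keep = np.abs(r) > 0.3                            # stand-in for the significance screen
        Xs = X[:, keep]

        # 2. PCA on the selected indexes
        Xc = Xs - Xs.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        scores = Xc @ Vt[:3].T                            # leading principal component scores

        # 3. Ordinary least squares on the component scores
        A = np.column_stack([np.ones(len(y)), scores])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        y_hat = A @ beta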

  18. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    Science.gov (United States)

    Zhu, Qing; Zou, Yingchao; Lai, Kin Keung

    2014-01-01

    As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of mixture of data characteristics in the time scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economic viable interpretations. PMID:25061614

  19. Modelling of the vertical migration process of phosphogypsum components in the soil profile

    Directory of Open Access Journals (Sweden)

    Chernysh Ye. Yu.

    2017-12-01

    Full Text Available This paper focuses on the study of the process of vertical migration of phosphogypsum components through the soil profile. The qualitative and quantitative identification of the main biogenic elements (phosphorus, sulphur, calcium, etc.) and heavy metals in lysimetric solutions from various horizons, when solutions containing phosphogypsum components reach the soil surface, is carried out by means of a purpose-designed laboratory experimental complex. The mineral hard soil fraction is also analysed. According to the results of the X-ray diffractometric studies, carbonates with heavy metals in their structure, caused by ion exchange with Ca2+, were found in the mineral structure of the illuvial horizon soil samples. The results of experimental modeling indicate significant changes in the chemical parameters of groundwater, which are obtained by passing water with phosphogypsum particles through a model soil profile, which makes it easy to track the input data. In the upper part of the profile after 1 000 hours, and for the first speed of the infiltration process, the constant moisture level was 25.6%; after the second speed of infiltration, it rose to 29.1%. It is noted that the highest concentrations of biogenic elements (calcium, sulfur, potassium) were found in lysimetric solutions obtained from the humus and eluvial horizons. In addition, it is determined that iron is present up to 5%, nickel within the range of 1–3%, and copper up to 1%. It should be noted that the biochemical transformations of silicon influence the fractional distribution of heavy metals, which can be fixed by sorption-sedimentation mechanisms in silica, oligo and polysilicon compounds, as well as in crystalline lattice structures of clay minerals, quartz, etc. The model of the soil and geochemical situation was formed according to the soil profile under the influence of the phosphogypsum within the three-dimensional surface, developed with the help of the

  20. Patterns and individual-based modeling of spatial competition within two main components of Neotropical mangrove ecosystems.

    OpenAIRE

    Piou, Cyril

    2007-01-01

    The main focus of the thesis was to look at the role of competition in shaping the spatial organization of two main components of Neotropical mangrove ecosystems. The first component was the Caribbean mangrove tree community. The second was populations of an ecologically and economically important mangrove burrowing crab in North-Brazil: Ucides cordatus. These two components were studied in two respective parts with one field and two individual-based modeling studies each. The two individual-...

  1. Specific features of modelling rules of monetary policy on the basis of hybrid regression models with a neural component

    Directory of Open Access Journals (Sweden)

    Lukianenko Iryna H.

    2014-01-01

    Full Text Available The article considers the possibilities and specific features of modelling economic phenomena with the category of models that unite elements of econometric regressions and artificial neural networks. This category of models contains auto-regression neural networks (AR-NN), regressions of smooth transition (STR/STAR), multi-mode regressions of smooth transition (MRSTR/MRSTAR) and smooth transition regressions with neural coefficients (NCSTR/NCSTAR). The neural network component allows models of this category to achieve high empirical accuracy, including reproduction of complex non-linear interrelations. On the other hand, the regression mechanism expands the possibilities for interpreting the obtained results. An example of a multi-mode monetary rule is used to show one case of specification and interpretation of such a model. In particular, the article models and interprets the principles of management of the UAH exchange rate that come into force when the economy passes from a relatively stable state into a crisis state.
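    A smooth transition regression of the kind referred to above (STR/STAR) switches between two linear regimes through a logistic transition function; the minimal sketch below uses synthetic data and fixed coefficients rather than the neural coefficients of the NCSTR/NCSTAR variants.

        import numpy as np

        def logistic_transition(s, gamma, c):
            """Smooth switch between regimes as the transition variable s passes the threshold c."""
            return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

        rng = np.random.default_rng(3)
        T = 300
        s = rng.standard_normal(T).cumsum() / 10          # transition variable (e.g. exchange-rate pressure)
        x = np.column_stack([np.ones(T), rng.standard_normal(T)])
        beta1 = np.array([0.2, 0.5])                      # "calm regime" coefficients
        beta2 = np.array([1.0, -0.8])                     # "crisis regime" adjustment
        G = logistic_transition(s, gamma=4.0, c=0.5)
        y = x @ beta1 + G * (x @ beta2) + 0.1 * rng.standard_normal(T)
        # Estimation would search over (gamma, c) and solve for the betas by least squares at each
        # trial; neural-coefficient variants replace the fixed betas with network outputs.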

  2. A multi-component and multi-failure mode inspection model based on the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system composed of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure for all defects. This is an approximation, but has been proven to be valid when the number of components is large. However, for a system with just a few key components and subject to few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than individual components, we then formulate the inspection model when the time to the next inspection from the point of a component failure renewal is random. This introduces some complication into the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison to the analytical results. A numerical example is presented to demonstrate the model.
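    For a single component inspected every T time units, with defects arriving as a Poisson process of rate λ and an independent delay-time distribution F, the textbook delay-time results give the expected numbers of failures and of defects caught per inspection cycle; a small numerical sketch with invented parameter values (exponential delay times, perfect inspection) follows.

        import numpy as np
        from scipy.integrate import quad

        lam = 0.05          # defect arrival rate per day (illustrative)
        mean_delay = 30.0   # mean delay time in days (illustrative)
        F = lambda h: 1.0 - np.exp(-h / mean_delay)       # exponential delay-time cdf

        def expected_counts(T):
            """Expected failures and defects found per inspection cycle of length T."""
            failures, _ = quad(lambda u: lam * F(T - u), 0.0, T)        # defect arises at u, fails by T
            found, _ = quad(lambda u: lam * (1.0 - F(T - u)), 0.0, T)   # still dormant at inspection
            return failures, found

        for T in (15, 30, 60, 90):
            nf, nd = expected_counts(T)
            print(f"T={T:>3} d: expected failures {nf:.2f}, defects found at inspection {nd:.2f}")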

  3. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  4. Investigating Effective Components of Higher Education Marketing and Providing a Marketing Model for Iranian Private Higher Education Institutions

    Science.gov (United States)

    Kasmaee, Roya Babaee; Nadi, Mohammad Ali; Shahtalebi, Badri

    2016-01-01

    Purpose: The purpose of this paper is to study and identify the effective components of higher education marketing and providing a marketing model for Iranian higher education private sector institutions. Design/methodology/approach: This study is a qualitative research. For identifying the effective components of higher education marketing and…

  5. Atomistic modeling of the structural components of the blood-brain barrier

    Science.gov (United States)

    Glukhova, O. E.; Grishina, O. A.; Slepchenkov, M. M.

    2015-03-01

    The blood-brain barrier (BBB), a barrage system between the brain and blood vessels, plays a key role in "isolating" the brain from unnecessary information and in reducing the "noise" in interneuron communication. It is known that the barrier function of the BBB depends strongly on the initial state of the organism and changes significantly with age, especially when "vascular accidents" develop. Disclosing the mechanisms that regulate the barrier function will enable new ways to deliver neurotrophic drugs to the brain in the newborn. The aim of this work is the construction of atomistic models of the structural components of the blood-brain barrier in order to reveal the mechanisms of regulation of the barrier function.

  6. A methodology to model physical contact between structural components in NASTRAN

    Science.gov (United States)

    Prabhu, Annappa A.

    1993-01-01

    Two components of a structure that are located side by side will come into contact under a compressive force and will transfer that force across the contact area. If the force acts in the opposite direction, the elements separate and no force is transferred. If this contact is modeled, the load path is correctly represented, and the load redistribution results in more realistic stresses in the structure. This is accomplished by using different sets of rigid elements for different loading conditions, or by creating multipoint constraint sets. A comparison of these two procedures is presented for a 4 panel unit (PU) stowage drawer installed in an experiment rack in the Spacelab Life Sciences (SLS-2) payload.

  7. Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing

    Science.gov (United States)

    Ordaz, Irian

    2011-01-01

    Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capability of VSP is demonstrated for component-based point definition geometries in a conceptual analysis and design framework.

  8. On the distribution of the stochastic component in SUE traffic assignment models

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    The paper discusses the use of different distributions of the stochastic component in SUE. A main conclusion is that they generally gave reasonably similar results, except for the LogNormal distribution, whose use is discouraged. However, in cases with low link costs (e.g. in dense urban areas, ramps...... and modelling of intersections and interchanges), distributions with long tails (Gumbel and Normal) gave biased results compared with the Rectangular distribution. The Triangular distribution gave results somewhere in between. Besides giving the most reasonable results, the Rectangular distribution is the most...... computationally efficient. All distributions gave a unique solution at link level after a sufficiently large number of iterations (up to 1,000 for full-scale networks), while the usual aggregated measures of convergence converged quite fast (under 50 iterations). The tests also showed that the distributions must

  9. Local Prediction Models on Mid-Atlantic Ridge MORB by Principal Component Regression

    Science.gov (United States)

    Ling, X.; Snow, J. E.; Chin, W.

    2017-12-01

    The isotopic compositions of the daughter isotopes of long-lived radioactive systems (Sr, Nd, Hf and Pb) can be used to map the scale and history of mantle heterogeneities beneath mid-ocean ridges. Our goal is to relate the multidimensional structure in the existing isotopic dataset with an underlying physical reality of mantle sources. The numerical technique of Principal Component Analysis is useful to reduce the linear dependence of the data to a minimum set of orthogonal eigenvectors encapsulating the information contained (cf. Agranier et al. 2005). The dataset used for this study covers almost all the MORBs along the Mid-Atlantic Ridge (MAR), from 54°S to 77°N and 8.8°W to -46.7°W, replicating the published dataset of Agranier et al. (2005) plus 53 basalt samples dredged and analyzed since then (data from PetDB). The principal components PC1 and PC2 account for 61.56% and 29.21%, respectively, of the total isotope-ratio variability. Samples with compositions similar to HIMU, EM and DM are identified to better understand the PCs. PC1 and PC2 are accountable for HIMU and EM, whereas PC2 has limited control over the DM source. PC3 is more strongly controlled by the depleted mantle source than PC2. This means that all three principal components have a high degree of significance relevant to the established mantle sources. We also tested the relationship between mantle heterogeneity and sample locality. The k-means clustering algorithm is a type of unsupervised learning that finds groups in the data based on feature similarity. The PC factor scores of each sample are clustered into three groups. Clusters one and three alternate along the northern and southern MAR. Cluster two appears from 45.18°N to 0.79°N and -27.9°W to -30.40°W, alternating with cluster one. The ridge has been preliminarily divided into 16 sections considering both the clusters and ridge segments. The principal component regression models the sections based on 6 isotope ratios and PCs. The
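    The workflow described above (PCA of the isotope-ratio matrix followed by k-means clustering of the factor scores) can be sketched as below; the random array stands in for the Sr-Nd-Hf-Pb ratios compiled from PetDB, and the cluster count of three follows the abstract.

        import numpy as np

        rng = np.random.default_rng(4)
        ratios = rng.standard_normal((500, 6))            # samples x 6 isotope ratios (placeholder)

        # PCA on standardized ratios
        Z = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0)
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        explained = s**2 / np.sum(s**2)                   # cf. 61.6% and 29.2% for PC1/PC2 in the abstract
        scores = Z @ Vt[:3].T                             # factor scores on PC1-PC3

        # Plain k-means (k = 3 clusters, as in the study) on the factor scores
        k = 3
        centers = scores[rng.choice(len(scores), k, replace=False)]
        for _ in range(50):
            labels = np.argmin(((scores[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([scores[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                for j in range(k)])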

  10. FEM modelling of shoes insoles components for standing and walking simulation

    Directory of Open Access Journals (Sweden)

    Braun Barbu

    2017-01-01

    Full Text Available The paper presents a research stage in which a new method for rapid prototyping of foot insole components was successfully tested on two young persons with minor orthopaedic conditions. The research at this stage is focused on FEA model analysis before and after prototyping, the model consisting of two sets of specific items with an orthopaedic correction role to be inserted into the plantar supporters. The simulation and testing were performed under conditions of wearing shoes containing such plantar supporters while standing and walking, these being the most common situations. The main problem studied was to verify whether the CAD-modelled orthotic items to be prototyped would withstand static and dynamic loads similar to those found when standing and walking. A good correlation between the testing results before and after prototyping of the items was demonstrated, especially for the second person. In addition, it was demonstrated that the applied FEA method could be successfully used to verify the endurance and resistance of the prototyped orthotic items.

  11. Integrated approaches to modeling the organic and inorganic atmospheric aerosol components

    Science.gov (United States)

    Koo, Bonyoung; Ansari, Asif S.; Pandis, Spyros N.

    A series of modeling approaches for the description of the dynamic behavior of secondary organic aerosol (SOA) components and their interactions with inorganics is presented. The models employ a lumped species approach based on available smog chamber studies and the UNIQUAC Functional-group Activity Coefficients (UNIFAC) method to estimate SOA water absorption. The additional water due to SOA species can change the partitioning behavior of the semi-volatile inorganics. Primary organic particles significantly influence the SOA partitioning between gas and aerosol phases. The SOA size distribution predicted by a bulk equilibrium approach is biased toward smaller sizes compared with that of a fully dynamic model. An improved weighting scheme for the bulk equilibrium approach is proposed in this work and is shown to minimize this discrepancy. SOA is predicted to increase the total aerosol water in Southern California by 2-13% depending on conditions. However, the effect of SOA water absorption on aerosol nitrate is insignificant for all the cases studied in Southern California.

  12. A Four-Component Model of Sexual Orientation & Its Application to Psychotherapy.

    Science.gov (United States)

    Bowins, Brad

    Distress related to sexual orientation is a common focus in psychotherapy. In some instances the distress is external in nature, as with persecution, and in others it is internal, as with self-acceptance issues. Complicating matters, sexual orientation is a very complex topic producing a great deal of confusion for both clients and therapists. The current paper provides a four-component model (sexual orientation dimensions, activation of these dimensions, the role of erotic fantasy, and the social construction of sexual orientation) that in combination provides a comprehensive perspective. Activation of dimensions is a novel contribution not proposed in any other model. With improved understanding of sexual orientation issues, and utilization of this knowledge to guide interventions, psychotherapists can improve outcomes with their clients. Also described is how dimensions of sexual orientation relate to transgender. In addition to improving psychotherapy outcomes, the four-component model presented can help reduce discrimination and persecution, by demonstrating that the capacity for both homoerotic and heteroerotic behavior is universal.

  13. Thermodynamically consistent modeling and simulation of multi-component two-phase flow with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2017-12-09

    A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, which has recently become an attractive alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy used in the NPT-based framework. Different from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then we derive a transport equation of the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of multiple fluids. A relation between the pressure gradient and chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies the total (free) energy dissipation with time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling relations between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and deal with the coupling relations between molar densities and velocity through very careful physical observations with mathematical rigor. We prove that the proposed numerical scheme can preserve the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
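    The convex-concave (Eyre-type) splitting mentioned above treats the convex part of the free energy implicitly and the concave part explicitly, which keeps the discrete free energy non-increasing; a scalar toy version for the double-well f(c) = (c² - 1)²/4, rather than the Peng-Robinson Helmholtz energy of the paper, is shown below.

        import numpy as np

        def f(c):          # double-well free energy density
            return 0.25 * (c**2 - 1.0)**2

        def step(c_old, dt):
            """One convex-concave step for dc/dt = -f'(c): implicit in the convex part c^3,
            explicit in the concave part -c.  Solve c + dt*c^3 = c_old + dt*c_old by Newton."""
            rhs = c_old + dt * c_old
            c = c_old
            for _ in range(50):
                g = c + dt * c**3 - rhs
                c -= g / (1.0 + 3.0 * dt * c**2)
                if abs(g) < 1e-12:
                    break
            return c

        c, dt = 1.8, 0.5
        energies = [f(c)]
        for _ in range(40):
            c = step(c, dt)
            energies.append(f(c))
        # The discrete free energy never increases, mirroring the dissipation property proved in the paper.
        assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:]))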

  14. Microglia Morphological Categorization in a Rat Model of Neuroinflammation by Hierarchical Cluster and Principal Components Analysis.

    Science.gov (United States)

    Fernández-Arjona, María Del Mar; Grondona, Jesús M; Granados-Durán, Pablo; Fernández-Llebrez, Pedro; López-Ávalos, María D

    2017-01-01

    It is known that microglia morphology and function are closely related, but only a few studies have objectively described different morphological subtypes. To address this issue, morphological parameters of microglial cells were analyzed in a rat model of aseptic neuroinflammation. After the injection of a single dose of the enzyme neuraminidase (NA) within the lateral ventricle (LV) an acute inflammatory process occurs. Sections from NA-injected animals and sham controls were immunolabeled with the microglial marker IBA1, which highlights ramifications and features of the cell shape. Using images obtained by section scanning, individual microglial cells were sampled from various regions (septofimbrial nucleus, hippocampus and hypothalamus) at different times post-injection (2, 4 and 12 h). Each cell yielded a set of 15 morphological parameters by means of image analysis software. Five initial parameters (including fractal measures) were statistically different in cells from NA-injected rats (most of them IL-1β positive, i.e., M1-state) compared to those from control animals (none of them IL-1β positive, i.e., surveillant state). However, additional multimodal parameters proved more suitable for hierarchical cluster analysis (HCA). This method classified the microglia population into four clusters. Furthermore, a linear discriminant analysis (LDA) suggested three specific parameters to objectively classify any microglia by a decision tree. In addition, a principal components analysis (PCA) revealed two extra valuable variables that allowed microglia to be further classified into a total of eight sub-clusters or types. The spatio-temporal distribution of these different morphotypes in our rat inflammation model made it possible to relate specific morphotypes to microglial activation status and brain location. An objective method for microglia classification based on morphological parameters is proposed. Main points Microglia undergo a quantifiable

  15. Principal component artificial neural network calibration models for the simultaneous spectrophotometric estimation of mefenamic acid and paracetamol in tablets

    Directory of Open Access Journals (Sweden)

    RAJAPPAN MANAVALAN

    2006-11-01

    Full Text Available Simultaneous estimation of all drug components in a multicomponent analgesic dosage form with artificial neural network calibration models using UV spectrophotometry is reported as a simple alternative to using separate models for each component. A novel approach for calibration using a compound spectral dataset derived from three spectra of each component is described. The spectra of mefenamic acid and paracetamol were recorded at several concentrations within their linear range and used to compute a calibration mixture between the wavelengths 220 and 340 nm. Neural networks trained by a Levenberg–Marquardt algorithm were used for building and optimizing the calibration models using the MATLAB® Neural Network Toolbox and were compared with the principal component regression model. The calibration models were thoroughly evaluated at several concentration levels using 104 spectra obtained for 52 synthetic binary mixtures prepared using orthogonal designs. The optimized model showed sufficient robustness even when the calibration sets were constructed from a different set of pure spectra of the components. The simultaneous prediction of both components by a single neural network with the suggested calibration approach was successful. The model could accurately estimate the drugs, with satisfactory precision and accuracy, in tablet dosage forms with no interference from excipients, as indicated by the results of a recovery study.
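    The calibration idea (compress the digitized spectra, then map the compressed representation to the two drug concentrations) can be sketched generically with scikit-learn as below; the Gaussian-band "spectra" are synthetic, and this is not the authors' MATLAB Neural Network Toolbox model.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(8)
        wl = np.linspace(220, 340, 121)                     # wavelengths (nm)
        band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
        eps_mef, eps_par = band(285, 18), band(245, 15)     # made-up component spectra

        conc = rng.uniform(0.2, 1.0, size=(120, 2))         # mefenamic acid / paracetamol "concentrations"
        spectra = conc @ np.vstack([eps_mef, eps_par]) + 0.01 * rng.standard_normal((120, wl.size))

        # PCA compression followed by a small feedforward network mapping scores to concentrations
        model = make_pipeline(PCA(n_components=4),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
        model.fit(spectra[:80], conc[:80])
        pred = model.predict(spectra[80:])
        print("RMSE per component:", np.sqrt(((pred - conc[80:]) ** 2).mean(axis=0)))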

  16. Physics-Based Stress Corrosion Cracking Component Reliability Model cast in an R7-Compatible Cumulative Damage Framework

    Energy Technology Data Exchange (ETDEWEB)

    Unwin, Stephen D.; Lowry, Peter P.; Layton, Robert F.; Toloczko, Mychailo B.; Johnson, Kenneth I.; Sanborn, Scott E.

    2011-07-01

    This is a working report drafted under the Risk-Informed Safety Margin Characterization pathway of the Light Water Reactor Sustainability Program, describing statistical models of passive component reliabilities.

  17. Rubber airplane: Constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    Science.gov (United States)

    Kolb, Mark A.

    1990-01-01

    Viewgraphs on Rubber Airplane: Constraint-based Component-Modeling for Knowledge Representation in Computer Aided Conceptual Design are presented. Topics covered include: computer aided design; object oriented programming; airfoil design; surveillance aircraft; commercial aircraft; aircraft design; and launch vehicles.

  18. Using Meta-modeling in Design and Implementation of Component-based Systems: The SOFA Case-study

    Czech Academy of Sciences Publication Activity Database

    Hnětynka, P.; Plášil, František

    2011-01-01

    Roč. 41, č. 11 (2011), s. 1185-1201 ISSN 0038-0644 Grant - others:GA ČR(CZ) GA201/08/0266 Keywords : software architectures * software components * model-driven development * meta-models * model transformation * ADL Subject RIV: JC - Computer Hardware ; Software Impact factor: 0.519, year: 2011

  19. Four simultaneous component models for the analysis of multivariate time series from more than one subject to model intraindividual and interindividual differences

    NARCIS (Netherlands)

    Timmerman, Mariek E.; Kiers, Henk A.L.

    A class of four simultaneous component models for the exploratory analysis of multivariate time series collected from more than one subject simultaneously is discussed. In each of the models, the multivariate time series of each subject is decomposed into a few series of component scores and a

  20. Assessing adsorption of polycyclic aromatic hydrocarbons on Rhizopus oryzae cell wall components with water-methanol cosolvent model.

    Science.gov (United States)

    Ma, Bin; Lv, Xiaofei; He, Yan; Xu, Jianming

    2016-03-01

    The contribution of different fungal cell wall components to the adsorption of polycyclic aromatic hydrocarbons (PAHs) is still unclear. We isolated Rhizopus oryzae cell wall components with sequential extraction, characterized functional groups with NEXAFS spectra, and determined partition coefficients of PAHs on cell walls and cell wall components with a cosolvent model. NEXAFS spectra indicated that the isolated cell wall components showed characteristic peaks at ~532.7 and ~534.5 eV. The lipid cosolvent partition coefficients were approximately one order of magnitude higher than the corresponding carbohydrate cosolvent partition coefficients. The partition coefficients for the four tested carbohydrates varied by approximately 0.5 logarithmic units. Partition coefficients between biosorbents and water calculated with the cosolvent models ranged from 0.8 to 4.2. The present study proved the importance of fungal cell wall components in the adsorption of PAHs, and consequently the role of fungi in PAH bioremediation. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Model-Based Sensor Placement for Component Condition Monitoring and Fault Diagnosis in Fossil Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mobed, Parham [Texas Tech Univ., Lubbock, TX (United States); Pednekar, Pratik [West Virginia Univ., Morgantown, WV (United States); Bhattacharyya, Debangsu [West Virginia Univ., Morgantown, WV (United States); Turton, Richard [West Virginia Univ., Morgantown, WV (United States); Rengaswamy, Raghunathan [Texas Tech Univ., Lubbock, TX (United States)

    2016-01-29

    Design and operation of energy-producing, near "zero-emission" coal plants have become a national imperative. This report on model-based sensor placement describes a transformative two-tier approach to identify the optimum placement, number, and type of sensors for condition monitoring and fault diagnosis in fossil energy system operations. The algorithms are tested on a high fidelity model of the integrated gasification combined cycle (IGCC) plant. For a condition monitoring network, whether equipment should be considered at a unit level or a systems level depends upon the criticality of the process equipment, its likelihood of failure, and the level of resolution desired for any specific failure. Because of the presence of a high fidelity model at the unit level, a sensor network can be designed to monitor the spatial profile of the states and estimate fault severity levels. In an IGCC plant, besides the gasifier, the sour water gas shift (WGS) reactor plays an important role. In view of this, condition monitoring of the sour WGS reactor is considered at the unit level, while a detailed plant-wide model of the gasification island, including the sour WGS reactor and the Selexol process, is considered for fault diagnosis at the system level. Finally, the developed algorithms unify the two levels and identify an optimal sensor network that maximizes the effectiveness of the overall system-level fault diagnosis and component-level condition monitoring. This work could have a major impact on the design and operation of future fossil energy plants, particularly at the grassroots level where the sensor network is yet to be identified. In addition, the same algorithms developed in this report can be further enhanced to be used in retrofits, where the objectives could be upgrades (addition of more sensors) and relocation of existing sensors.

  2. Analysis and Modeling of the Galvanic Skin Response Spontaneous Component in the context of Intelligent Biofeedback Systems Development

    Science.gov (United States)

    Unakafov, A.

    2009-01-01

    The paper presents an approach to the analysis and modeling of the galvanic skin response (GSR) spontaneous component. In the study, a classification of biofeedback training methods is given and the importance of developing intelligent methods is shown. The INTENS method, which is promising for intellectualization, is presented. An important problem in making biofeedback training methods intelligent - estimation of the GSR spontaneous component - is solved in the main part of the work. Its main characteristics are described, and results of modeling the GSR spontaneous component are shown. Results of a small study on the optimum material for GSR probes are also presented.

  3. Component characterization and predictive modeling for green roof substrates optimized to adsorb P and improve runoff quality: A review.

    Science.gov (United States)

    Jennett, Tyson S; Zheng, Youbin

    2017-12-14

    This review is a synthesis of the current knowledge regarding the effects of green roof substrate components and their retentive capacity for nutrients, particularly phosphorus (P). Substrates may behave as either sources or sinks of P depending on the components they are formulated from, and to date, the total P-adsorbing capacity of a substrate has not been quantified as the sum of the contributions of its components. Few direct links have been established among substrate components and their physicochemical characteristics that would affect P-retention. A survey of recent literature presented herein highlights the trends within individual component selection (clays and clay-like material, organics, conventional soil and sands, lightweight inorganics, and industrial wastes and synthetics) for those most common during substrate formulation internationally. Component selection will vary with respect to ease of sourcing component materials, cost of components, nutrient-retention capacity, and environmental sustainability. However, the number of distinct components considered for inclusion in green roof substrates continues to expand, as the desires of growers, material suppliers, researchers and industry stakeholders are incorporated into decision-making. Furthermore, current attempts to characterize the most often used substrate components are also presented whereby runoff quality is correlated to entire substrate performance. With the use of well-described characterization (constant capacitance model) and modeling techniques (the soil assemblage model), it is proposed that substrates optimized for P adsorption may be developed through careful selection of components with prior knowledge of their chemical properties, that may increase retention of P in plant-available forms, thereby reducing green roof fertilizer requirements and P losses in roof runoff. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Full Text Available Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect many downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are helpful not only for finding the root cause when a plant-wide disturbance occurs, but also for revealing the evolution of an abnormal event propagating through the plant. This paper addresses the information flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfying results in predicting the frequently occurring "nitrogen-block" fault.
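    Time-delayed mutual information between a cause variable x and an effect variable y is the mutual information between x(t) and y(t + τ) as a function of the lag τ; the lag maximizing it estimates the propagation delay. A histogram-based sketch with a known 15-sample delay (synthetic signals, not plant data) follows.

        import numpy as np

        def mutual_info(a, b, bins=16):
            """Histogram estimate of mutual information (in nats) between two samples."""
            pxy, _, _ = np.histogram2d(a, b, bins=bins)
            pxy = pxy / pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

        rng = np.random.default_rng(5)
        x = rng.standard_normal(3000).cumsum()
        x -= np.linspace(0, x[-1], x.size)                       # detrended "upstream" variable
        delay = 15
        y = np.roll(x, delay) + 0.3 * rng.standard_normal(x.size)   # "downstream" variable: delayed copy plus noise

        lags = range(1, 40)
        tdmi = [mutual_info(x[:-lag], y[lag:]) for lag in lags]
        print("estimated delay:", list(lags)[int(np.argmax(tdmi))])  # expect a value near 15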

  5. Trajectory modeling of gestational weight: A functional principal component analysis approach.

    Directory of Open Access Journals (Sweden)

    Menglu Che

    Full Text Available Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate individual weight growth trajectories using sparsely collected bodyweight data, and to identify the factors affecting weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and better adaptability than classic nonlinear mixed-effect models, demonstrating a novel tool that can be used to facilitate real-time monitoring and interventions for GWG. Our regression analysis showed that prepregnancy BMI had high predictive value for weight change during pregnancy, which agrees with the published weight gain guideline.
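
    The paper uses the conditional-expectation (PACE) estimator for sparse visits; the simplified sketch below instead assumes each subject is observed on a common grid of gestational weeks, which is enough to show the FPCA mechanics (mean curve, covariance eigen-decomposition, per-subject scores). All data here are simulated.

    ```python
    # Minimal dense-grid FPCA illustration (simplified relative to the sparse-data
    # method in the paper; simulated weight-gain curves, not study data).
    import numpy as np

    rng = np.random.default_rng(1)
    weeks = np.linspace(8, 40, 33)          # common observation grid
    n = 200
    slopes = rng.normal(0.45, 0.1, n)       # subject-specific linear gain
    curv = rng.normal(0.004, 0.002, n)      # subject-specific curvature
    curves = slopes[:, None] * (weeks - 8) + curv[:, None] * (weeks - 8) ** 2
    curves += rng.normal(0, 0.5, curves.shape)

    mean_curve = curves.mean(axis=0)
    centered = curves - mean_curve
    cov = centered.T @ centered / (n - 1)    # sample covariance surface
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]

    scores = centered @ evecs[:, :2]         # first two FPC scores per subject
    explained = evals[:2] / evals.sum()
    print("variance explained by first two FPCs:", explained.round(3))
    ```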

  6. The dynamical role of the central molecular ring within the framework of a seven-component Galaxy model

    Science.gov (United States)

    Simin, A. A.; Fridman, A. M.; Haud, U. A.

    1991-09-01

    A Galaxy model in which the surface density of the gas component has a sharp (two orders of magnitude) jump near the outer radius of the molecular ring is constructed on the basis of observational data. This model is used to calculate the contribution of each population to the model curve of Galactic rotation. The dimensionless growth rate of the hydrodynamical instability for the gas component, which is much less than 1, coincides with the corresponding value for the same gas in the gravitational field of the entire Galaxy. It is concluded that the unstable gas component of the Galaxy lies near the threshold of hydrodynamical instability, in accordance with the Le Chatelier principle. The stellar populations of the Galaxy probably do not affect the generation of the spiral structure in the gaseous component.

  7. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    Science.gov (United States)

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with the synesthetes' reported grapheme-color association. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.

  8. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    Directory of Open Access Journals (Sweden)

    Árni Gunnar Ásgeirsson

    Full Text Available Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with the synesthetes' reported grapheme-color association. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.

  9. Studies of Westward Electrojets and Field-Aligned Currents in the Magnetotail During Substorms: Implications for Magnetic Field Models

    Science.gov (United States)

    Spence, Harlan E.

    1996-01-01

    discrete features in the context of the global picture. We reported on our initial study at national and international meetings and published the results of our predictions of the low-altitude signatures of the plasma sheet. In addition, the PI was invited to contribute a publication to the so-called 'Great Debate in Space Physics' series that is a feature of EOS. The topic was the nature of magnetospheric substorms. Specific questions of when and where a substorm occurs and the connection between the auroral and magnetospheric components were discussed in that paper. This paper therefore was derived exclusively from the research supported by this grant. Attachments: 'Empirical modeling of the quiet time nightside magnetosphere', 'CRRES observations of particle flux dropout event', 'The what, where, when, and why of magnetospheric substorm triggers', and 'Low altitude signature of the plasma sheet: model prediction of local time dependence'.

  10. The Community Land Model and Its Climate Statistics as a Component of the Community Climate System Model

    Energy Technology Data Exchange (ETDEWEB)

    Dickinson, Robert E. [Georgia Institute of Technology; Oleson, Keith [National Center for Atmospheric Research (NCAR); Bonan, Gordon [National Center for Atmospheric Research (NCAR); Hoffman, Forrest M [ORNL; Thornton, Peter [National Center for Atmospheric Research (NCAR); Vertenstein, Mariana [National Center for Atmospheric Research (NCAR); Yang, Zong-Liang [University of Texas, Austin; Zeng, Xubin [University of Arizona

    2006-01-01

    Several multidecadal simulations have been carried out with the new version of the Community Climate System Model (CCSM). This paper reports an analysis of the land component of these simulations. Global annual averages over land appear to be within the uncertainty of observational datasets, but the seasonal cycle of temperature and precipitation over land appears to be too weak. These departures from observations appear to be primarily a consequence of deficiencies in the atmospheric model rather than in the land processes. High latitudes in northern winter are biased sufficiently warm to have a significant impact on the simulated value of global land temperature. The precipitation is approximately double the observed values at some locations, and the snowpack and spring runoff are also excessive. The winter precipitation over Tibet is larger than observed. About two-thirds of this precipitation is sublimated during the winter, but what remains still produces a snowpack that is very large compared to that observed, with correspondingly excessive spring runoff. A large cold anomaly over the Sahara Desert and Sahel also appears to be a consequence of a large anomaly in downward longwave radiation; low column water vapor appears to be most responsible. The modeled precipitation over the Amazon basin is low compared to that observed, the soil becomes too dry, and the temperature is too warm during the dry season.

  11. Thermodynamically consistent modeling and simulation of multi-component two-phase flow model with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2016-11-25

    A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, a recent alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy of the NPT-based framework. Departing from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then derive a transport equation for the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of two fluids. A relation between the pressure gradient and the chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies total (free) energy dissipation over time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and handle the coupling between molar densities and velocity through careful physical observations with mathematical rigor. We prove that the proposed numerical scheme preserves the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
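
    As a point of reference for the "realistic equation of state" mentioned above, the following is a standalone sketch of the single-component Peng-Robinson pressure as a function of temperature and molar density; the CO2 critical constants are standard literature values used only for illustration and are not taken from the cited work.

    ```python
    # Hedged sketch of the Peng-Robinson equation of state for one component.
    import numpy as np

    R = 8.314462618  # universal gas constant, J/(mol K)

    def peng_robinson_pressure(T, rho, Tc, Pc, omega):
        """Pressure [Pa] from temperature T [K] and molar density rho [mol/m^3]."""
        a = 0.45724 * R**2 * Tc**2 / Pc
        b = 0.07780 * R * Tc / Pc
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
        v = 1.0 / rho  # molar volume
        return R * T / (v - b) - a * alpha / (v * (v + b) + b * (v - b))

    # Example: CO2 at 320 K and 5000 mol/m^3 (illustrative inputs only)
    print(peng_robinson_pressure(320.0, 5000.0, Tc=304.13, Pc=7.377e6, omega=0.228))
    ```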

  12. A kinetic model for impact/sliding wear of pressurized water reactor internal components. Application to rod cluster control assemblies

    International Nuclear Information System (INIS)

    Zbinden, M.; Durbec, V.

    1996-12-01

    A new concept of industrial wear model adapted to components of nuclear plants is proposed. Its originality is that it is supported, on the one hand, by experimental results obtained from wear test machines over relatively short operating times and, on the other hand, by operating-feedback information on the real wear kinetics of reactor components. The proposed model is illustrated by an example corresponding to a specific real situation. The determination of the coefficients needed to cover a whole set of configurations and the validation of the model in these configurations have been the object of the most recent work. (author)

  13. Modeling the survival responses of a multi-component biofilm to environmental stress

    Science.gov (United States)

    Carles Brangarí, Albert; Manzoni, Stefano; Sanchez-Vila, Xavier; Fernàndez-Garcia, Daniel

    2017-04-01

    Biofilms are consortia of microorganisms embedded in self-produced matrices of biopolymers. The survival of such communities depends on their capacity to improve the environmental conditions of their habitat by mitigating, or even benefiting from, some adverse external factors. The mechanisms by which the microbial habitat is regulated remain mostly unknown. However, many studies have reported physiological responses to environmental stresses that include the release of extracellular polymeric substances (EPS) and the induction of a dormancy state. A sound understanding of these capacities is required to enhance the knowledge of microbial dynamics in soils and their potential role in the carbon cycle, with significant implications for the degradation of contaminants and the emission of greenhouse gases, among others. We present a numerical analysis of the dynamics of soil microbes and their responses to environmental stresses. The conceptual model considers a multi-component heterotrophic biofilm made up of active cells, dormant cells, EPS, and extracellular enzymes. Biofilm distribution and properties are defined at the pore-scale and used to determine nutrient availability and water saturation via feedbacks of biofilm on soil hydraulic properties. The pore space micro-habitat is modeled as a simplified pore-network of cylindrical tubes in which biofilms proliferate. Microbial compartments and most of the carbon fluxes are defined at the bulk level. Microbial processes include the synthesis, decay and detachment of biomass, the activation/deactivation of cells, and the release and reutilization of EPS. Results suggest that the release of EPS and the capacity to enter a dormant state offer clear evolutionary advantages in scenarios characterized by environmental stress. On the contrary, when the conditions are favorable, the diversion of carbon into the production of the aforementioned survival mechanisms does not confer any additional benefit and the population

  14. Mathematical modelling of ultrasonic testing of components with defects close to a non-planar surface

    International Nuclear Information System (INIS)

    Westlund, Jonathan; Bostroem, Anders

    2011-05-01

    Nondestructive testing with ultrasound is a standard procedure in the nuclear power industry. To develop and qualify the methods extensive experimental work with test blocks is usually required. This can be very time-consuming and costly and it also requires a good physical intuition of the situation. A reliable mathematical model of the testing situation can, therefore, be very valuable and cost-effective as it can reduce experimental work significantly. A good mathematical model enhances the physical intuition and is very useful for parametric studies, as a pedagogical tool, and for the qualification of procedures and personnel. The aim of the present report is to describe work that has been performed to model ultrasonic testing of components that contain a defect close to a nonplanar surface. For nuclear power applications this may be a crack or other defect on the inside of a pipe with a diameter change or connection. This is an extension of the computer program UTDefect, which previously only admits a planar back surface (which is often applicable also to pipes if the pipe diameter is large enough). The problems are investigated in both 2D and 3D, and in 2D both the simpler anti-plane (SH) and the in-plane (P-SV) problems are studied. The 2D investigations are primarily solved to get a 'feeling' for the solution procedure, the discretizations, etc. In all cases an integral equation approach with a Green's function in the kernel is taken. The nonplanar surface is treated by the boundary element method (BEM) where a division of the surface is made in small elements. The defects are mainly cracks, strip-like (in 2D) or rectangular (in 3D), and these are treated with more analytical methods. In 2D also more general defects are treated with the help of their transition (T) matrix. As in other parts of UTDefect the ultrasonic probes in transmission and reception are included in the model. In 3D, normalization by a side-drilled hole is possible. Some numerical results

  15. Damage Detection of Refractory Based on Principle Component Analysis and Gaussian Mixture Model

    Directory of Open Access Journals (Sweden)

    Changming Liu

    2018-01-01

    Full Text Available The acoustic emission (AE) technique is a common approach to identifying damage in refractories; however, the analysis is complicated because as many as fifteen parameters are involved, which calls for effective data processing and classification algorithms to reduce the level of complexity. In this paper, experiments involving three-point bending tests of refractories were conducted and AE signals were collected. A new data processing method was developed that merges similar parameters describing the damage and reduces the dimensionality. By means of principal component analysis (PCA) for dimension reduction, the fifteen related parameters can be reduced to two parameters. These two parameters are linear combinations of the fifteen original parameters and were taken as the indexes for damage classification. Based on the proposed approach, the Gaussian mixture model was integrated with the Bayesian information criterion to group the AE signals into two damage categories, which accounted for 99% of all damage. Electron microscope scanning of the refractories verified the two types of damage.
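
    The PCA-then-GMM-with-BIC pipeline described above is standard enough that it can be sketched with scikit-learn; the synthetic 15-parameter feature matrix below is an assumption standing in for real AE hit parameters.

    ```python
    # Illustrative pipeline in the spirit of the paper: reduce 15 AE parameters
    # to 2 principal components, then pick the number of Gaussian mixture
    # components by BIC and use the mixture labels as damage categories.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    # Fake AE hits: two damage mechanisms, 15 waveform parameters each.
    X = np.vstack([rng.normal(0, 1, (300, 15)), rng.normal(3, 1, (300, 15))])

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

    models = [GaussianMixture(k, random_state=0).fit(scores) for k in range(1, 6)]
    best = min(models, key=lambda m: m.bic(scores))   # lowest BIC wins
    labels = best.predict(scores)
    print("selected components:", best.n_components,
          "cluster sizes:", np.bincount(labels))
    ```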

  16. Rapid cultivar identification of barley seeds through disjoint principal component modeling.

    Science.gov (United States)

    Whitehead, Iain; Munoz, Alicia; Becker, Thomas

    2017-01-01

    Classification of barley varieties is a crucial part of the control and assessment of barley seeds, especially for the malting and brewing industry. Correct classification of barley is essential because most decisions regarding process specifications, economic considerations, and the type of product produced from the cereal are based on the barley variety itself. This fact, combined with the need to promptly assess the cereal as it is delivered to a malt house or production facility, creates the need for a technique to quickly identify a barley variety from a sample. This work explores the feasibility of differentiating between barley varieties based on the protein spectrum of barley seeds. In order to produce a rapid analysis of the protein composition of the barley seeds, lab-on-a-chip microfluidic technology is used to analyze the protein composition. Classification of the barley variety is then made using disjoint principal component models. This work included 19 different barley varieties, consisting of both winter and summer barley types. It is demonstrated that this system can identify the most likely barley variety with an accuracy of 95.9% based on cross validation and can screen summer barley with an accuracy of 95.2% and a false positive rate of 0.0% based on cross validation. This demonstrates the feasibility of the method to provide a rapid and relatively inexpensive way to verify the heritage of barley seeds.
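
    Disjoint principal component modeling (SIMCA-style) fits one PCA model per class and assigns a new sample to the class whose model reconstructs it best. The sketch below is a hedged illustration with invented variety names, feature dimensions and toy data, not the authors' implementation.

    ```python
    # SIMCA-style disjoint PCA classification sketch.
    import numpy as np
    from sklearn.decomposition import PCA

    def fit_class_models(spectra_by_variety, n_components=3):
        """Fit a separate PCA model to each variety's training spectra."""
        return {name: PCA(n_components).fit(X) for name, X in spectra_by_variety.items()}

    def classify(models, spectrum):
        """Assign the spectrum to the variety whose PCA model reconstructs it best."""
        residuals = {}
        for name, pca in models.items():
            recon = pca.inverse_transform(pca.transform(spectrum[None, :]))
            residuals[name] = float(np.sum((spectrum - recon[0]) ** 2))
        return min(residuals, key=residuals.get), residuals

    # Toy data: two hypothetical varieties with slightly different mean profiles.
    rng = np.random.default_rng(3)
    train = {"variety_A": rng.normal(0.0, 1.0, (40, 50)),
             "variety_B": rng.normal(0.8, 1.0, (40, 50))}
    models = fit_class_models(train)
    label, _ = classify(models, rng.normal(0.8, 1.0, 50))
    print(label)
    ```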

  17. Microclimatic models. Estimation of components of the energy balance over land surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Heikinheimo, M.; Venaelaeinen, A.; Tourula, T. [Finnish Meteorological Inst., Helsinki (Finland). Air Quality Dept.

    1996-12-31

    Climates at regional scale are strongly dependent on the interaction between the atmosphere and its lower boundary, the oceans and the land surface mosaic. Land surfaces influence climate through their albedo, aerodynamic roughness, biospheric processes and many soil hydrological properties; all these factors vary considerably geographically. Land surfaces receive a certain portion of the solar irradiance depending on the cloudiness, atmospheric transparency and surface albedo. Short-wave solar irradiance is the source of the heat energy exchange at the earth's surface and also regulates many biological processes, e.g. photosynthesis. Methods for estimating solar irradiance, atmospheric transparency and surface albedo were reviewed during the course of this project. The solar energy at the earth's surface is consumed in heating the soil and the lower atmosphere. Where moisture is available, evaporation is one of the key components of the surface energy balance, because the conversion of liquid water into water vapour consumes heat. The evaporation process was studied by carrying out field experiments and testing parameterisations for a cultivated agricultural surface and for lakes. The micrometeorological study over lakes was carried out as part of the international 'Northern Hemisphere Climatic Processes Experiment' (NOPEX/BAHC) in Sweden. These studies have been aimed at a better understanding of the energy exchange processes at the earth's surface-atmosphere boundary for a more accurate and realistic parameterisation of the land surface in atmospheric models.

  18. Nuclear fuel cycle system simulation tool based on high-fidelity component modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ames, David E.

    2014-02-01

    The DOE is currently directing extensive research into developing fuel cycle technologies that will enable the safe, secure, economic, and sustainable expansion of nuclear energy. The task is formidable considering the numerous fuel cycle options, the large dynamic system that each represents, and the necessity to accurately predict their behavior. The path to successfully developing and implementing an advanced fuel cycle depends heavily on the modeling capabilities and simulation tools available for performing useful, relevant analysis to assist stakeholders in decision making. Therefore, a high-fidelity fuel cycle simulation tool that performs system analysis, including uncertainty quantification and optimization, was developed. The resulting simulator also includes the capability to calculate environmental impact measures for individual components and for the system. An integrated system method and analysis approach that provides consistent and comprehensive evaluations of advanced fuel cycles was developed. A general approach was adopted that allows the system to be modified in order to provide analysis for other systems with similar attributes. This approach provides a framework for simulating many different fuel cycle options. Two example fuel cycle configurations were developed to take advantage of used-fuel recycling and transmutation capabilities in waste management scenarios, leading to minimized waste inventories.

  19. Discrete Boltzmann modeling of Rayleigh-Taylor instability in two-component compressible flows

    Science.gov (United States)

    Lin, Chuandong; Xu, Aiguo; Zhang, Guangcai; Luo, Kai Hong; Li, Yingjun

    2017-11-01

    A discrete Boltzmann model (DBM) is proposed to probe the Rayleigh-Taylor instability (RTI) in two-component compressible flows. Each species has a flexible specific-heat ratio and is described by one discrete Boltzmann equation (DBE). Independent discrete velocities are adopted for the two DBEs. The collision and force terms in the DBE account for the molecular collision and external force, respectively. Two types of force terms are exploited. In addition to recovering the modified Navier-Stokes equations in the hydrodynamic limit, the DBM has the capability of capturing detailed nonequilibrium effects. Furthermore, we use the DBM to investigate the dynamic process of the RTI. The invariants of tensors for nonequilibrium effects are presented and studied. For low Reynolds numbers, both global nonequilibrium manifestations and the growth rate of the entropy of mixing show three stages (i.e., the reducing, increasing, and then decreasing trends) in the evolution of the RTI. On the other hand, the early reducing tendency is suppressed and even eliminated for high Reynolds numbers. Relevant physical mechanisms are analyzed and discussed.

  20. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for

  1. Obtaining manufactured geometries of deep-drawn components through a model updating procedure using geometric shape parameters

    Science.gov (United States)

    Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan

    2018-01-01

    The vibration response of a component or system can be predicted using the finite element method once the numerical model is verified to represent the realistic behaviour of the actual system under study. One way to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both the profile and the thickness distribution occur during the manufacturing process. Conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence, a new model updating process is proposed in which geometric shape variables are incorporated by morphing the finite element model. This morphing process imitates the changes that occur during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for further analysis, as it represents the manufactured part more accurately. Simulations performed using this updated model, with its more accurate geometry, will therefore yield more reliable results.
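
    The MAC matrix whose diagonal the GRSM optimization maximizes is a standard correlation measure between mode shapes; a minimal computation is sketched below with placeholder mode-shape arrays (the real ones would come from the test and the finite element model).

    ```python
    # Modal Assurance Criterion (MAC) between test and FE mode shapes.
    import numpy as np

    def mac_matrix(phi_test, phi_fe):
        """MAC[i, j] between test mode i and FE mode j (columns are mode shapes)."""
        num = np.abs(phi_test.T @ phi_fe) ** 2
        den = np.outer(np.sum(phi_test**2, axis=0), np.sum(phi_fe**2, axis=0))
        return num / den

    rng = np.random.default_rng(4)
    phi_exp = rng.standard_normal((120, 4))                    # 120 DoF, 4 measured modes
    phi_sim = phi_exp + 0.05 * rng.standard_normal((120, 4))   # slightly perturbed FE modes
    print(np.round(mac_matrix(phi_exp, phi_sim), 3))           # near-identity for a good match
    ```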

  2. Computer-aided process planning in prismatic shape die components based on Standard for the Exchange of Product model data

    Directory of Open Access Journals (Sweden)

    Awais Ahmad Khan

    2015-11-01

    Full Text Available Until a few years ago, insufficient technologies made good integration of die components across design, process planning, and manufacturing impossible. Nowadays, advanced technologies based on the Standard for the Exchange of Product model data are making it possible. This article discusses the three main steps for achieving complete process planning for prismatic parts of die components. These three steps are data extraction, feature recognition, and process planning. The proposed computer-aided process planning system works as part of an integrated system to cover the process planning of any prismatic-part die component. The system is built using Visual Basic with the EWDraw system for visualizing the Standard for the Exchange of Product model data file. The system works successfully and can cover any type of sheet metal die component. The case study discussed in this article is taken from a large design of a progressive die.

  3. SR-Site Data report. THM modelling of buffer, backfill and other system components

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Boergesson, Lennart; Kristensson, Ola

    2010-03-01

    This report is a supplement to the SR-Site data report. Based on the issues raised in the Process reports concerning THM processes in buffer, backfill and other system components, 22 modelling tasks have been identified, representing different aspects of the repository evolution. The purpose of this data report is to provide parameter values for the materials included in these tasks. Two codes, Code_Bright and Abaqus, have been employed for the tasks. The data qualification has focused on the bentonite material for buffer, backfill and the seals for tunnel plugs and bore-holes. All these system components have been treated as if they were based on MX-80 bentonite. The sources of information and documentation of the data qualification for the parameters for MX-80 have been listed. A substantial part of the refinement, especially concerning parameters used for Code_Bright, is presented in the report. The data qualification has been performed through a motivated and transparent chain; from measurements, via evaluations, to parameter determinations. The measured data was selected to be as recent, traceable and independent as possible. The data sets from this process are thus regarded to be qualified. The conditions for which the data is supplied, the conceptual uncertainties, the spatial and temporal variability and correlations are briefly presented and discussed. A more detailed discussion concerning the data uncertainty due to precision, bias and representativity is presented for measurements of swelling pressure, hydraulic conductivity, shear strength, retention properties and thermal conductivity. The results from the data qualification are presented as a detailed evaluation of measured data. In order to strengthen the relevance of the parameter values and to confirm previously used relations, either newer or independent measurements have been taken into account in the parameter value evaluation. Previously used relations for swelling pressure, hydraulic

  4. SR-Site Data report. THM modelling of buffer, backfill and other system components

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Boergesson, Lennart; Kristensson, Ola (Clay Technology AB, Lund (Sweden))

    2010-03-15

    This report is a supplement to the SR-Site data report. Based on the issues raised in the Process reports concerning THM processes in buffer, backfill and other system components, 22 modelling tasks have been identified, representing different aspects of the repository evolution. The purpose of this data report is to provide parameter values for the materials included in these tasks. Two codes, Code_Bright and Abaqus, have been employed for the tasks. The data qualification has focused on the bentonite material for buffer, backfill and the seals for tunnel plugs and bore-holes. All these system components have been treated as if they were based on MX-80 bentonite. The sources of information and documentation of the data qualification for the parameters for MX-80 have been listed. A substantial part of the refinement, especially concerning parameters used for Code_Bright, is presented in the report. The data qualification has been performed through a motivated and transparent chain; from measurements, via evaluations, to parameter determinations. The measured data was selected to be as recent, traceable and independent as possible. The data sets from this process are thus regarded to be qualified. The conditions for which the data is supplied, the conceptual uncertainties, the spatial and temporal variability and correlations are briefly presented and discussed. A more detailed discussion concerning the data uncertainty due to precision, bias and representativity is presented for measurements of swelling pressure, hydraulic conductivity, shear strength, retention properties and thermal conductivity. The results from the data qualification are presented as a detailed evaluation of measured data. In order to strengthen the relevance of the parameter values and to confirm previously used relations, either newer or independent measurements have been taken into account in the parameter value evaluation. Previously used relations for swelling pressure, hydraulic

  5. A general mixed boundary model reduction method for component mode synthesis

    NARCIS (Netherlands)

    Voormeeren, S.N.; Van der Valk, P.L.C.; Rixen, D.J.

    2010-01-01

    A classic issue in component mode synthesis (CMS) methods is the choice of fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the component reduction basis. In this paper, a novel mixed boundary CMS method called the “Mixed

  6. An Equiratio Mixture Model for non-additive components: a case study for aspartame/acesulfame-K mixtures.

    Science.gov (United States)

    Schifferstein, H N

    1996-02-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for aspartame/sucrose mixtures. In this paper, the sweetness of aspartame/acesulfame-K mixtures in aqueous and acidic solutions is investigated. These two intensive sweeteners probably do not comply with the model's original assumption of sensory dependency among components. However, they reveal how the Equiratio Mixture Model could be modified to describe and predict mixture functions for non-additive substances. To predict equiratio functions for all similar tasting substances, a new Equiratio Mixture Model should yield accurate predictions for components eliciting similar intensities at widely differing concentration levels, and for substances exhibiting hypo- or hyperadditivity. In addition, it should be able to correct violations of Stevens's power law. These three problems are resolved in a model that uses equi-intense units as the measure of physical concentration. An interaction index in the formula for the constant accounts for the degree of interaction between mixture components. Deviations from the power law are corrected by a nonlinear response output transformation, assuming a two-stage model of psychophysical judgment.

  7. Efficient scattering-angle enrichment for a nonlinear inversion of the background and perturbations components of a velocity model

    Science.gov (United States)

    Wu, Zedong; Alkhalifah, Tariq

    2017-09-01

    Reflection-waveform inversion (RWI) can help us reduce the nonlinearity of the standard full-waveform inversion by inverting for the background velocity model using the wave path of a single scattered wavefield to an image. However, current RWI implementations usually neglect the multiscattered energy, which will cause some artefacts in the image and the update of the background. To improve existing RWI implementations in taking multiscattered energy into consideration, we split the velocity model into background and perturbation components, integrate them directly in the wave equation and formulate a new optimization problem for both components. In this case, the perturbed model is no longer a single-scattering model, but includes all scattering. Through introducing a new cheap implementation of scattering angle enrichment, the separation of the background and perturbation components can be implemented efficiently. We optimize both components simultaneously to produce updates to the velocity model that is nonlinear with respect to both the background and the perturbation. The newly introduced perturbation model can absorb the non-smooth update of the background in a more consistent way. We apply the proposed approach on the Marmousi model with data that contain frequencies starting from 5 Hz to show that this method can converge to an accurate velocity starting from a linearly increasing initial velocity. Also, our proposed method works well when applied to a field data set.

  8. Efficient scattering-angle enrichment for a nonlinear inversion of the background and perturbations components of a velocity model

    KAUST Repository

    Wu, Zedong

    2017-07-04

    Reflection-waveform inversion (RWI) can help us reduce the nonlinearity of the standard full-waveform inversion (FWI) by inverting for the background velocity model using the wave-path of a single scattered wavefield to an image. However, current RWI implementations usually neglect the multi-scattered energy, which will cause some artifacts in the image and the update of the background. To improve existing RWI implementations in taking multi-scattered energy into consideration, we split the velocity model into background and perturbation components, integrate them directly in the wave equation, and formulate a new optimization problem for both components. In this case, the perturbed model is no longer a single-scattering model, but includes all scattering. Through introducing a new cheap implementation of scattering angle enrichment, the separation of the background and perturbation components can be implemented efficiently. We optimize both components simultaneously to produce updates to the velocity model that is nonlinear with respect to both the background and the perturbation. The newly introduced perturbation model can absorb the non-smooth update of the background in a more consistent way. We apply the proposed approach on the Marmousi model with data that contain frequencies starting from 5 Hz to show that this method can converge to an accurate velocity starting from a linearly increasing initial velocity. Also, our proposed method works well when applied to a field data set.

  9. Disentangling the associations between parental BMI and offspring body composition using the four‐component model

    Science.gov (United States)

    Grijalva‐Eternod, Carlos; Cortina‐Borja, Mario; Williams, Jane; Fewtrell, Mary; Wells, Jonathan

    2016-01-01

    ABSTRACT Objectives This study sets out to investigate the intergenerational associations between the body mass index (BMI) of parents and the body composition of their offspring. Methods The cross‐sectional data were analyzed for 511 parent–offspring trios from London and south‐east England. The offspring were aged 5–21 years. Parental BMI was obtained by recall and offspring fat mass and lean mass were obtained using the four‐component model. Multivariable regression analysis, with multiple imputation for missing paternal values was used. Sensitivity analyses for levels of non‐paternity were conducted. Results A positive association was seen between parental BMI and offspring BMI, fat mass index (FMI), and lean mass index (LMI). The mother's BMI was positively associated with the BMI, FMI, and LMI z‐scores of both daughters and sons and of a similar magnitude for both sexes. The father's BMI showed similar associations to the mother's BMI, with his son's BMI, FMI, and LMI z‐scores, but no association with his daughter. Sensitivity tests for non‐paternity showed that maternal coefficients remained greater than paternal coefficients throughout but there was no statistical difference at greater levels of non‐paternity. Conclusions We found variable associations between parental BMI and offspring body composition. Associations were generally stronger for maternal than paternal BMI, and paternal associations appeared to differ between sons and daughters. In this cohort, the mother's BMI was statistically significantly associated with her child's body composition but the father's BMI was only associated with the body composition of his sons. Am. J. Hum. Biol. 28:524–533, 2016. © 2016 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc. PMID:26848813

  10. Expression of cellular components in granulomatous inflammatory response in Piaractus mesopotamicus model.

    Directory of Open Access Journals (Sweden)

    Wilson Gómez Manrique

    Full Text Available The present study aimed to describe and characterize the cellular components during the evolution of chronic granulomatous inflammation in the teleost fish pacu (P. mesopotamicus) induced by Bacillus Calmette-Guerin (BCG), using S-100, iNOS and cytokeratin antibodies. Fifty fish (120±5.0 g) were anesthetized; 45 were inoculated with 20 μL (40 mg/mL; 2.0 x 10^6 CFU/mg) and five with saline (0.65%) into muscle tissue in the laterodorsal region. To evaluate the inflammatory process, nine fish inoculated with BCG and one control were sampled at five time points: the 3rd, 7th, 14th, 21st and 33rd days post-inoculation (DPI). Immunohistochemical examination showed that staining with anti-S-100 protein and anti-iNOS antibodies was weak, with a diffuse pattern, between the third and seventh DPI. From the 14th to the 33rd day, the staining became stronger and marked the cytoplasm of the macrophages. Positivity for cytokeratin was first observed on the 14th DPI, with the strongest immunostaining on the 33rd day, the period in which the epithelioid cells were most evident and the granuloma was fully formed. Also after the 14th day, a certain degree of cellular organization was observed, due to the arrangement of the macrophages around the inoculated material, with little evidence of edema. The arrangement of the macrophages around the inoculum, the fibroblasts, the lymphocytes and, in most cases, the presence of melanomacrophages formed the granuloma and kept the inoculum isolated at the 33rd DPI. The present study suggests that the granulomatous experimental model using the teleost fish P. mesopotamicus presents a response similar to that observed in mammals, confirming its importance for studies of chronic inflammatory reaction.

  11. Reliability Models Of Aging Passive Components Informed By Materials Degradation Metrics To Support Long-Term Reactor Operations

    International Nuclear Information System (INIS)

    Unwin, Stephen D.; Lowry, Peter P.; Toyooka, Michael Y.

    2012-01-01

    This paper describes a methodology for the synthesis of nuclear power plant service data with expert-elicited materials degradation information to estimate the future failure rates of passive components. This method should be an important resource for long-term plant operations and reactor life extension. Conventional probabilistic risk assessments (PRAs) are not well suited to addressing long-term reactor operations. Since passive structures and components are among those for which replacement can be least practical, they might be expected to contribute increasingly to risk in an aging plant; yet passives receive limited treatment in PRAs. Furthermore, PRAs produce only snapshots of risk based on the assumption of time-independent component failure rates. This assumption is unlikely to be valid in aging systems. The treatment of aging passive components in PRA presents challenges. Service data to quantify component reliability models are sparse, and this is exacerbated by the greater data demands of age-dependent reliability models. Another factor is that there can be numerous potential degradation mechanisms associated with the materials and operating environment of a given component. This deepens the data problem, since risk-informed management of component aging will demand an understanding of the long-term risk significance of individual degradation mechanisms. In this paper, we describe a Bayesian methodology that integrates metrics of materials degradation susceptibility with available plant service data to estimate age-dependent passive component reliabilities. Integration of these models into conventional PRA will provide a basis for materials degradation management informed by predicted long-term operational risk.
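
    As a hedged toy version of the Bayesian synthesis described above (not the authors' actual formulation), a conjugate gamma prior on a passive component's failure rate, informed for example by expert degradation judgments, can be updated with sparse service data; the prior parameters, failure count and exposure time below are all invented for illustration.

    ```python
    # Gamma-Poisson Bayesian update of a failure rate from sparse service data.
    from scipy import stats

    prior_alpha, prior_beta = 0.5, 2000.0      # expert-informed gamma prior (assumed)
    failures, exposure_years = 1, 350.0        # sparse plant service data (assumed)

    post_alpha = prior_alpha + failures
    post_beta = prior_beta + exposure_years

    posterior = stats.gamma(a=post_alpha, scale=1.0 / post_beta)
    print("posterior mean failure rate [/yr]:", post_alpha / post_beta)
    print("95% credible interval:", posterior.interval(0.95))
    ```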

  12. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions, using a structured continuous-time Markov chain (CTMC). The versatility of phase-type distributions enhances the flexibility and practicality of the models. By virtue of these benefits, reliability engineering studies can advance beyond previous work. This study attempts to solve a redundancy allocation problem (RAP) by using these new models. The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, the experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximating error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distribution is used for components. • Reliability model for nonrepairable system is developed using Markov chain. • System is composed of heterogeneous components. • Model provides the real value of standby system reliability, not an approximation. • Redundancy allocation problem is used to show usefulness of this model.
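
    For a phase-type time-to-failure distribution, the component reliability follows the standard closed form R(t) = alpha * exp(Tt) * 1, where alpha is the initial phase distribution and T the CTMC sub-generator over the transient phases. The two-phase example below uses invented rate values purely to show the computation.

    ```python
    # Reliability of a component with a phase-type time-to-failure distribution.
    import numpy as np
    from scipy.linalg import expm

    alpha = np.array([1.0, 0.0])                 # initial phase probabilities
    T = np.array([[-0.02, 0.015],                # sub-generator over transient phases;
                  [0.0,  -0.05]])                # exit rates to failure are implied

    def reliability(t):
        """Probability that failure has not occurred by time t."""
        return float(alpha @ expm(T * t) @ np.ones(2))

    for t in (10.0, 50.0, 100.0):
        print(t, round(reliability(t), 4))
    ```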

  13. Identification and Assessment of Material Models for Age-Related Degradation of Structures and Passive Components in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Choi, In Kil; Kim, Min Kyu; Hofmayer, Charles; Braverman, Joseph; Nie, Jinsuo

    2009-03-01

    This report describes the research effort performed by BNL for the Year 2 scope of work. This research focused on methods that could be used to represent the long-term behavior of materials used at NPPs. To achieve this, BNL reviewed time-dependent models that can approximate the degradation effects of the key materials used in the construction of structures and passive components determined to be of interest in the Year 1 effort. The intent was to review degradation models that would cover the most common time-dependent changes in material properties for concrete and steel components.

  14. Physics-Based Stress Corrosion Cracking Component Reliability Model cast in an R7-Compatible Cumulative Damage Framework

    International Nuclear Information System (INIS)

    Unwin, Stephen D.; Lowry, Peter P.; Layton, Robert F.; Toloczko, Mychailo B.; Johnson, Kenneth I.; Sanborn, Scott E.

    2011-01-01

    This is a working report drafted under the Risk-Informed Safety Margin Characterization pathway of the Light Water Reactor Sustainability Program, describing statistical models of passive component reliabilities. The Risk-Informed Safety Margin Characterization (RISMC) pathway is a set of activities defined under the U.S. Department of Energy Light Water Reactor Sustainability Program. The overarching objective of RISMC is to support plant life-extension decision-making by providing a state-of-knowledge characterization of safety margins in key systems, structures, and components (SSCs). The methodology emerging from the RISMC pathway is not a conventional probabilistic risk assessment (PRA)-based one; rather, it relies on a reactor systems simulation framework in which physical conditions of normal reactor operations, as well as accident environments, are explicitly modeled subject to uncertainty characterization. RELAP 7 (R7) is the platform being developed at Idaho National Laboratory to model these physical conditions. Adverse effects of aging systems could be particularly significant in those SSCs for which management options are limited; that is, components for which replacement, refurbishment, or other means of rejuvenation are least practical. These include various passive SSCs, such as piping components. Pacific Northwest National Laboratory is developing passive component reliability models intended to be compatible with the R7 framework. In the R7 paradigm, component reliability must be characterized in the context of the physical environments that R7 predicts. So, while conventional reliability models are parametric, relying on the statistical analysis of service data, RISMC reliability models must be physics-based and driven by the physical boundary conditions that R7 provides, thus allowing full integration of passives into the R7 multi-physics environment. The model must also be cast in a form compatible with the cumulative damage framework that R7

  15. Dynamic thermal modelling and analysis of press-pack IGBTs both at component-level and chip-level

    DEFF Research Database (Denmark)

    Busca, Cristian; Teodorescu, Remus; Blaabjerg, Frede

    2013-01-01

    Thermal models are needed when designing power converters for Wind Turbines (WTs) in order to carry out thermal and reliability assessment of certain designs. Usually the thermal models of Insulated Gate Bipolar Transistors (IGBTs) are given in the datasheet in various forms at component-level, not taking into account the thermal distribution among the chips. This is especially relevant in the case of Press-Pack (PP) IGBTs because any non-uniformity of the clamping pressure can affect the chip-level thermal impedances. This happens because the contact thermal resistances in the thermal impedance chains are clamping pressure dependent. In this paper both component-level and chip-level dynamic thermal models for the PP IGBT under investigation are developed. Both models are developed using geometric parameters and material properties of the device. Using the thermal models, the thermal impedance
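
    Datasheet-style component-level thermal models of the kind mentioned above are commonly expressed as a Foster network, Zth(t) = sum_i R_i (1 - exp(-t / tau_i)); the record does not state which network form the authors use, and the R/tau pairs and power loss below are invented, not values for any particular PP IGBT.

    ```python
    # Hedged Foster-network sketch of a component-level transient thermal impedance.
    import numpy as np

    R = np.array([0.008, 0.015, 0.020])     # thermal resistances, K/W (assumed)
    tau = np.array([0.01, 0.1, 1.0])        # time constants, s (assumed)

    def zth(t):
        """Junction-to-case transient thermal impedance [K/W] at time t [s]."""
        return float(np.sum(R * (1.0 - np.exp(-t / tau))))

    power_loss = 1500.0  # W, assumed device loss
    for t in (0.01, 0.1, 1.0, 10.0):
        print(t, "s ->", round(power_loss * zth(t), 2), "K temperature rise")
    ```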

  16. Bayesian Modeling of the Assimilative Capacity Component of Stream Nutrient Export

    Science.gov (United States)

    Implementing stream restoration techniques and best management practices to reduce nonpoint source nutrients implies enhancement of the assimilative capacity for the stream system. In this paper, a Bayesian method for evaluating this component of a TMDL load capacity is developed...

  17. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    Science.gov (United States)

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Such systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and policies. Modelling the architectures of such complex dynamic systems has always been essential for developing and applying techniques and tools that support the design and deployment of new component integrations, as well as for the analysis, verification, simulation and testing needed to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object systems (rCOS) modelling method, is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring by design the scalability, interoperability and correctness of component cooperation.

  18. Models of SEC elution curves for binary and multi-component polymers

    Czech Academy of Sciences Publication Activity Database

    Netopilík, Miloš; Kratochvíl, Pavel

    2009-01-01

    Roč. 58, č. 2 (2009), s. 198-201 ISSN 0959-8103 R&D Projects: GA AV ČR IAA4050403; GA AV ČR IAA400500703 Institutional research plan: CEZ:AV0Z40500505 Keywords : size-exclusion chromatography * multi-component polymers * minority components detection Subject RIV: CD - Macromolecular Chemistry Impact factor: 2.137, year: 2009

  19. Component- and system-level degradation modeling of digital Instrumentation and Control systems based on a Multi-State Physics Modeling Approach

    International Nuclear Information System (INIS)

    Wang, Wei; Di Maio, Francesco; Zio, Enrico

    2016-01-01

    Highlights: • A Multi-State Physics Modeling (MSPM) framework for reliability assessment is proposed. • Monte Carlo (MC) simulation is utilized to estimate the degradation state probability. • Due account is given to stochastic uncertainty and deterministic degradation progression. • The MSPM framework is applied to the reliability assessment of a digital I&C system. • Results are compared with the results obtained with a Markov Chain Model (MCM). - Abstract: A system-level degradation model is proposed for the reliability assessment of digital Instrumentation and Control (I&C) systems in Nuclear Power Plants (NPPs). At the component level, we focus on the reliability assessment of a Resistance Temperature Detector (RTD), an important digital I&C component used to guarantee the safe operation of NPPs. A Multi-State Physics Model (MSPM) is built to describe this component's degradation progression towards failure, and Monte Carlo (MC) simulation is used to estimate the probability of sojourn in any of the previously defined degradation states, accounting for both the stochastic and the deterministic processes that affect the degradation progression. The MC simulation relies on an integrated modeling of stochastic processes with deterministic aging of components, which proves fundamental for estimating the joint cumulative probability distribution of finding the component in any of the possible degradation states. The results of applying the proposed degradation model to a digital I&C system from the literature are compared with the results obtained by a Markov Chain Model (MCM). The integrated stochastic-deterministic process proposed here to drive the MC simulation makes it feasible to integrate component-level models into a system-level model that considers inter-system and/or inter-component dependencies and uncertainties.
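
    The Monte Carlo estimation step can be illustrated with a much-reduced sketch: a component moving through discrete degradation states with assumed constant transition rates, counting which state each simulated history occupies at the mission time. The full MSPM additionally couples in deterministic aging and physical variables, which this sketch omits; all rates are invented.

    ```python
    # Simplified MC estimate of degradation-state probabilities at a mission time.
    import numpy as np

    rates = [0.01, 0.02, 0.05]     # per-year rate of moving to the next state (assumed)
    horizon, n_hist = 40.0, 20000  # mission time [years] and number of histories
    rng = np.random.default_rng(5)

    counts = np.zeros(4)           # states 0 (new) .. 3 (failed)
    for _ in range(n_hist):
        t, state = 0.0, 0
        while state < 3:
            t += rng.exponential(1.0 / rates[state])   # sojourn time in current state
            if t > horizon:
                break
            state += 1
        counts[state] += 1

    print("state probabilities at", horizon, "years:", counts / n_hist)
    ```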

  20. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method combining local mean decomposition Shannon entropy and an improved kernel principal component analysis model was proposed. First, features are extracted with a time-frequency domain method, local mean decomposition, and the Shannon entropy is used to process the separated product functions so as to obtain the original features. However, the extracted features still contain superfluous information; the nonlinear multi-feature processing technique, kernel principal component analysis, is therefore introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were then input to the Morlet wavelet kernel support vector
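
    The entropy step above can be sketched independently of the decomposition: compute the Shannon entropy of each decomposed component and use those values as raw features before the kernel PCA stage. The band-limited sinusoids below merely stand in for actual LMD product functions, and the histogram-based entropy estimator is an assumption.

    ```python
    # Hedged sketch: Shannon entropy of each decomposed component as a feature.
    import numpy as np

    def shannon_entropy(signal, bins=32):
        """Shannon entropy (nats) of the amplitude distribution of one component."""
        hist, _ = np.histogram(signal, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    rng = np.random.default_rng(6)
    t = np.linspace(0, 1, 4096)
    components = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
                  for f in (50, 120, 300)]   # stand-ins for LMD product functions
    features = [shannon_entropy(c) for c in components]
    print(np.round(features, 3))
    ```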