WorldWideScience

Sample records for model low-altitude component

  1. Current leakage for low altitude satellites: modeling applications

    International Nuclear Information System (INIS)

    Konradi, A.; McCoy, J.E.; Garriott, O.K.

    1979-01-01

    To simulate the behavior of a high voltage solar cell array in the ionospheric plasma environment, a large vacuum chamber (90 ft x 55 ft diameter) was used to measure the high-voltage plasma interactions of a 3 ft x 30 ft conductive panel. The chamber was filled with nitrogen and argon plasma at electron densities of up to 1,000,000 per cu cm. Measurements of current flow to the plasma were made in three configurations: (a) with one end of the panel grounded, (b) with the whole panel floating while a high bias was applied between the ends of the panel, and (c) with the whole panel at high negative voltage with respect to the chamber walls. The results indicate that a simple model with a constant panel conductivity and plasma resistance can adequately describe the voltage distribution along the panel and the plasma current flow. As expected, when a high potential difference is applied to the panel ends, more than 95% of the panel floats negative with respect to the plasma.
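    The "constant panel conductivity and plasma resistance" picture described above can be illustrated with a discretized resistor-ladder calculation: series resistances along the panel, a fixed leakage resistance from every node to the plasma. The sketch below is a minimal illustration only; the segment resistance, plasma resistance, and bias values are assumed, and a linear model like this does not capture the asymmetric electron/ion collection behind the observed 95%-negative floating behavior.

      import numpy as np

      # Hypothetical discretization: the panel is a ladder of series resistances with a
      # constant leakage resistance from every node to the surrounding plasma (taken at 0 V).
      N = 50                            # number of panel segments
      R_seg = 0.2                       # ohms per panel segment (assumed)
      R_plasma = 500.0                  # ohms from each node to the plasma (assumed)
      V_left, V_right = -500.0, 500.0   # bias applied between the panel ends (volts, assumed)

      g_seg, g_pl = 1.0 / R_seg, 1.0 / R_plasma
      G = np.zeros((N + 1, N + 1))
      rhs = np.zeros(N + 1)
      for k in range(N):                      # series conductances along the panel
          G[k, k] += g_seg; G[k + 1, k + 1] += g_seg
          G[k, k + 1] -= g_seg; G[k + 1, k] -= g_seg
      G[np.diag_indices(N + 1)] += g_pl       # leakage conductance to the plasma

      for node, v in ((0, V_left), (N, V_right)):   # impose the end voltages
          G[node, :] = 0.0; G[node, node] = 1.0; rhs[node] = v

      V = np.linalg.solve(G, rhs)             # voltage profile along the panel
      I_leak = V * g_pl                       # leakage current collected at each node
      print(f"total plasma current {I_leak.sum():.3f} A, "
            f"fraction of panel below plasma potential {np.mean(V < 0):.0%}")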

  2. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter

    Science.gov (United States)

    2009-03-31

    AFRL-RV-HA-TR-2009-1055. Final scientific report, covering 02-08-2006 to 31-12-2008. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter. ...m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable...

  3. Math modeling for helicopter simulation of low speed, low altitude and steeply descending flight

    Science.gov (United States)

    Sheridan, P. F.; Robinson, C.; Shaw, J.; White, F.

    1982-01-01

    A math model was formulated to represent some of the aerodynamic effects of low speed, low altitude, and steeply descending flight. The formulation is intended to be consistent with the single rotor real time simulation model at NASA Ames Research Center. The effect of low speed, low altitude flight on main rotor downwash was obtained by assuming a uniform plus first harmonic inflow model and then by using wind tunnel data in the form of hub loads to solve for the inflow coefficients. The result was a set of tables for steady and first harmonic inflow coefficients as functions of ground proximity, angle of attack, and airspeed. The aerodynamics associated with steep descending flight in the vortex ring state were modeled by replacing the steady induced downwash derived from momentum theory with an experimentally derived value and by including a thrust fluctuations effect due to vortex shedding. Tables of the induced downwash and the magnitude of the thrust fluctuations were created as functions of angle of attack and airspeed.

  4. Dynamics modeling and control of a transport aircraft for ultra-low altitude airdrop

    Directory of Open Access Journals (Sweden)

    Liu Ri

    2015-04-01

    Full Text Available The nonlinear aircraft model with heavy cargo moving inside is derived by using the separation body method, which can describe the influence of the moving cargo on the aircraft attitude and altitude accurately. Furthermore, the nonlinear system is decoupled and linearized through the input–output feedback linearization method. On this basis, an iterative quasi-sliding mode (SM flight controller for speed and pitch angle control is proposed. At the first-level SM, a global dynamic switching function is introduced thus eliminating the reaching phase of the sliding motion. At the second-level SM, a nonlinear function with the property of “smaller errors correspond to bigger gains and bigger errors correspond to saturated gains” is designed to form an integral sliding manifold, and the overcompensation of the integral term to big errors is weakened. Lyapunov-based analysis shows that the controller with strong robustness can reject both constant and time-varying model uncertainties. The performance of the proposed control strategy is verified in a maximum load airdrop mission.
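    One common way to realize the "smaller errors get bigger gains, bigger errors get saturated gains" property described above is to pass the tracking error through a bounded nonlinearity before integrating it, so the integral term cannot overcompensate large errors. The sketch below is a generic illustration of such a saturated-integral sliding surface, not the controller from the paper; the first-order plant, the tanh saturation, and all gains are assumed for demonstration.

      import numpy as np

      def sat(e, m=1.0):
          # Bounded error: roughly linear (unit gain) for small errors,
          # saturated at +/- m for large ones, so the integral cannot wind up.
          return m * np.tanh(e / m)

      dt = 0.001
      x, z = 0.0, 0.0                 # plant state and integral of the saturated error
      c, k_s = 3.0, 5.0               # surface slope and switching gain (assumed)
      for step in range(5000):
          t = step * dt
          e = 1.0 - x                 # track x_ref = 1
          z += sat(e) * dt
          s = e + c * z               # integral sliding manifold built on the saturated error
          u = c * sat(e) + k_s * np.tanh(s / 0.05)   # equivalent + smoothed switching control
          d = 0.5 * np.sin(2.0 * t)                  # bounded matched disturbance
          x += (u + d) * dt           # toy first-order plant: x_dot = u + d
      print("final tracking error:", 1.0 - x)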

  5. Low-Altitude Operation of Unmanned Rotorcraft

    Science.gov (United States)

    Scherer, Sebastian

    Currently deployed unmanned rotorcraft rely on preplanned missions or teleoperation and do not actively incorporate information about obstacles, landing sites, wind, position uncertainty, and other aerial vehicles during online motion planning. Prior work has successfully addressed some tasks such as obstacle avoidance at slow speeds, or landing at locations known in advance to be good. However, to enable autonomous missions in cluttered environments, the vehicle has to react quickly to previously unknown obstacles, respond to changing environmental conditions, and find unknown landing sites. We consider the problem of enabling autonomous operation at low-altitude with contributions to four problems. First we address the problem of fast obstacle avoidance for a small aerial vehicle and present results from over 1000 runs at speeds up to 10 m/s. Fast response is achieved through a reactive algorithm whose response is learned based on observing a pilot. Second, we show an algorithm to update the obstacle cost expansion for path planning quickly and demonstrate it on a micro aerial vehicle, and an autonomous helicopter avoiding obstacles. Next, we examine the mission of finding a place to land near a ground goal. Good landing sites need to be detected and found, since the final touchdown goal is unknown. To detect the landing sites we present a model-based algorithm for landing sites that incorporates many helicopter-relevant constraints such as landing sites, approach, abort, and ground paths in 3D range data. The landing site evaluation algorithm uses a patch-based coarse evaluation for slope and roughness, and a fine evaluation that fits a 3D model of the helicopter and landing gear to calculate a goodness measure. The data are evaluated in real-time to enable the helicopter to decide on a place to land. We show results from urban, vegetated, and desert environments, and demonstrate the first autonomous helicopter that selects its own landing sites. We present a generalized
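    The patch-based coarse evaluation of slope and roughness mentioned above can be illustrated with a least-squares plane fit over a terrain patch: slope follows from the fitted normal, roughness from the residual scatter. The thresholds and the synthetic patch below are placeholders for illustration, not the values used in the thesis.

      import numpy as np

      def coarse_patch_score(pts, max_slope_deg=7.0, max_rough_m=0.10):
          """pts: (N, 3) array of x, y, z points from one terrain patch."""
          # Least-squares plane z = a*x + b*y + c
          A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
          coeff, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
          a, b, _ = coeff
          slope_deg = np.degrees(np.arctan(np.hypot(a, b)))     # tilt of the fitted plane
          roughness = np.std(pts[:, 2] - A @ coeff)             # residual scatter (m)
          ok = slope_deg <= max_slope_deg and roughness <= max_rough_m
          return ok, slope_deg, roughness

      # Synthetic 2 m x 2 m patch with a gentle slope and small noise
      rng = np.random.default_rng(0)
      xy = rng.uniform(-1.0, 1.0, size=(500, 2))
      z = 0.05 * xy[:, 0] + 0.02 * xy[:, 1] + rng.normal(0.0, 0.02, 500)
      print(coarse_patch_score(np.c_[xy, z]))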

  6. Space environment monitoring by low-altitude operational satellites

    International Nuclear Information System (INIS)

    Kroehl, H.W.

    1982-01-01

    The primary task of the Defense Meteorological Satellite Program (DMSP) is the acquisition of meteorological data in the visual and infrared spectral regions. The Air Weather Service operates two satellites in low-altitude, sun-synchronous, polar orbits at 850 km altitude, 98.7 deg inclination, 101.5 minute period and dawn-dusk or noon-midnight equatorial crossing times. Special DMSP sensors of interest to the space science community are the precipitating electron spectrometer, the terrestrial noise receiver, and the topside ionosphere plasma monitor. Data from low-altitude, meteorological satellites can be used to build empirical models of precipitating electron characteristics of the auroral zone and polar cap. The Tiros-NOAA satellite program complements the DMSP program. The orbital elements are the same as DMSP's, except for the times of equatorial crossing, and the tilt of the orbital plane. The Tiros-NOAA program meets the civilian community's needs for meteorological data as the DMSP program does for the military

  7. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

    Full Text Available Low-altitude Unmanned Aerial Vehicle (UAV) images, which exhibit distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangulation (AT) network, a parallel inner orientation algorithm, a ground control points (GCPs) predicting method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed, reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for photogrammetric processing of low-altitude UAV images and 3D visualization of products.
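    The improved SIFT step in the pipeline above ultimately yields tie points for bundle adjustment. A minimal sketch of the standard (unimproved) SIFT matching stage with OpenCV is shown below; the file names are placeholders and the ratio-test threshold is an assumed value, not the paper's tuned parameters.

      import cv2

      # Placeholder image pair from one UAV strip
      img1 = cv2.imread("uav_0001.jpg", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("uav_0002.jpg", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create(nfeatures=8000)
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # Two-nearest-neighbour matching with Lowe's ratio test
      matcher = cv2.BFMatcher(cv2.NORM_L2)
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

      # Tie-point coordinates handed on to relative orientation / bundle adjustment
      tie_pts = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
      print(len(tie_pts), "candidate tie points")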

  8. Fast calculation of low altitude disturbing gravity for ballistics

    Science.gov (United States)

    Wang, Jianqiang; Wang, Fanghao; Tian, Shasha

    2018-03-01

    Fast calculation of disturbing gravity is a key technology in ballistics, and spherical cap harmonic (SCH) theory can be used to solve this problem. By using adjusted spherical cap harmonic (ASCH) methods, the spherical cap coordinates are projected into global coordinates, and the non-integer associated Legendre functions (ALF) of SCH are then replaced by integer ALF of spherical harmonics (SH). This new method is called virtual spherical harmonics (VSH), and numerical experiments were done to test its effect. The results of an Earth gravity model were set as the theoretical observation, and the model of the regional gravity field was constructed by the new method. Simulation results show that the approximation errors are less than 5 mGal in the low altitude range of the central region. In addition, numerical experiments were conducted to compare the calculation speed of the SH model, SCH model and VSH model, and the results show that the calculation speed of the VSH model is raised by one order of magnitude over a small region.
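    For reference, the underlying spherical-harmonic evaluation that the VSH scheme accelerates can be sketched directly: the radial disturbing gravity follows from the derivative of the disturbing potential with respect to r. The tiny coefficient set below is fictitious and unnormalized; a real computation would use a published gravity model and fully normalized associated Legendre functions.

      import numpy as np
      from scipy.special import lpmv

      GM, a = 3.986004418e14, 6378137.0            # Earth GM (m^3/s^2) and reference radius (m)

      def disturbing_gravity(r, lat, lon, C, S):
          """Radial disturbing gravity from unnormalized coefficients C[n][m], S[n][m]."""
          sp, dg = np.sin(lat), 0.0
          for n in range(2, len(C)):
              for m in range(n + 1):
                  Pnm = lpmv(m, n, sp)
                  dg += (n + 1) * (a / r) ** n * Pnm * (
                      C[n][m] * np.cos(m * lon) + S[n][m] * np.sin(m * lon))
          return GM / r**2 * dg                    # m/s^2 (1 mGal = 1e-5 m/s^2)

      # Fictitious degree-3 coefficient set, purely for illustration
      C = [[0], [0, 0], [1e-6, 2e-7, 1e-7], [5e-7, 1e-7, 5e-8, 2e-8]]
      S = [[0], [0, 0], [0.0, 1e-7, 5e-8], [0.0, 8e-8, 3e-8, 1e-8]]
      print(disturbing_gravity(a + 30e3, np.radians(40.0), np.radians(116.0), C, S) / 1e-5, "mGal")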

  9. Investigating the auroral electrojets with low altitude polar orbiting satellites

    Directory of Open Access Journals (Sweden)

    T. Moretto

    2002-07-01

    Full Text Available Three geomagnetic satellite missions currently provide high precision magnetic field measurements from low altitude polar orbiting spacecraft. We demonstrate how these data can be used to determine the intensity and location of the horizontal currents that flow in the ionosphere, predominantly in the auroral electrojets. First, we examine the results during a recent geomagnetic storm. The currents derived from two satellites at different altitudes are in very good agreement, which verifies good stability of the method. Further, a very high degree of correlation (correlation coefficients of 0.8–0.9) is observed between the amplitudes of the derived currents and the commonly used auroral electrojet indices based on magnetic measurements at ground. This points to the potential of defining an auroral activity index based on the satellite observations, which could be useful for space weather monitoring. A specific advantage of the satellite observations over the ground-based magnetic measurements is their coverage of the Southern Hemisphere, as well as the Northern. We utilize this in an investigation of the ionospheric currents observed in both polar regions during a period of unusually steady interplanetary magnetic field with a large negative Y-component. A pronounced asymmetry is found between the currents in the two hemispheres, which indicates real inter-hemispheric differences beyond the mirror-asymmetry between hemispheres that earlier studies have revealed. The method is also applied to another event for which the combined measurements of the three satellites provide a comprehensive view of the current systems. The analysis hereof reveals some surprising results concerning the connection between solar wind driver and the resulting ionospheric currents. Specifically, preconditioning of the magnetosphere (history of the interplanetary magnetic field) is seen to play an important role, and in the winter hemisphere, it seems to be harder to

  10. Aircraft Survivability: Survivability in The Low Altitude Regime, Summer 2009

    Science.gov (United States)

    2009-01-01

    ...elevation, sun location, temperature, humidity, ozone level, visibility, cloud coverage, and wind speed and direction. Survivability in the Low Altitude... Calendar of events: JASP Summer PMSG, 14–16 July 2009, Key West, FL; 45th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, 2–5 August 2009, Denver, CO.

  11. Some low-altitude cusp dependencies on the interplanetary magnetic field

    International Nuclear Information System (INIS)

    Newell, P.T.; Meng, C.; Sibeck, D.G.; Lepping, R.

    1989-01-01

    Although it has become well established that the low-altitude polar cusp moves equatorward during intervals of southward interplanetary magnetic field (IMF Bz), its local time position also depends on By: the cusp tends to lie prenoon for By negative (positive) in the northern (southern) hemisphere and postnoon for By positive (negative) in the northern (southern) hemisphere. The By-induced shift is much more pronounced for southward than for northward Bz, a result that appears to be consistent with elementary considerations from, for example, the antiparallel merging model. No interhemispherical latitudinal differences in cusp positions were found that could be attributed to the IMF Bx component. As expected, the cusp latitudinal position correlated reasonably well (0.70) with Bz when the IMF had a southward component; the previously much less investigated correlation for Bz northward proved to be only 0.18, suggestive of a half-wave rectifier effect. The ratio of cusp ion number flux precipitation for Bz southward to that for Bz northward was 1.75±0.12. The statistical local time (full) width of the cusp proper was found to be 2.1 hours for Bz northward and 2.8 hours for Bz southward. copyright American Geophysical Union 1989

  12. Method for the visualization of landform by mapping using low altitude UAV application

    Science.gov (United States)

    Sharan Kumar, N.; Ashraf Mohamad Ismail, Mohd; Sukor, Nur Sabahiah Abdul; Cheang, William

    2018-05-01

    Unmanned Aerial Vehicles (UAVs) and digital photogrammetry are evolving rapidly as mapping technologies, and the significance of and need for digital landform mapping grow with them. In this study, a mapping workflow is applied to obtain two different input data sets, the orthophoto and the DSM. Low Altitude Aerial Photography (LAAP) is captured from a low-altitude UAV (drone) with a fixed advanced camera, while digital photogrammetric processing using PhotoScan is applied for cartographic data collection. Data processing through photogrammetry and orthomosaic generation are the main applications. High imagery quality is essential for the effectiveness and quality of the usual mapping outputs such as the 3D model, Digital Elevation Model (DEM), Digital Surface Model (DSM) and ortho images. The accuracy of Ground Control Points (GCP), the flight altitude and the resolution of the camera are essential for a good quality DEM and orthophoto.

  13. Source of the low-altitude hiss in the ionosphere

    Czech Academy of Sciences Publication Activity Database

    Chen, L.; Santolík, Ondřej; Hájoš, Mychajlo; Zheng, L.; Zhima, Z.; Heelis, R.; Hanzelka, Miroslav; Horne, R. B.; Parrot, M.

    2017-01-01

    Vol. 44, No. 5 (2017), p. 2060-2069 ISSN 0094-8276 R&D Projects: GA ČR(CZ) GA17-07027S; GA MŠk(CZ) LH15304 Grant - others: AV ČR(CZ) AP1401 Program: Akademická prémie - Praemium Academiae Institutional support: RVO:68378289 Keywords: ionospheric hiss * low-altitude hiss * plasmaspheric hiss * ray tracing Subject RIV: BL - Plasma and Gas Discharge Physics OECD category: Fluids and plasma physics (including surface physics) Impact factor: 4.253, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/2016GL072181/full

  14. The Study of a Super Low Altitude Satellite

    Science.gov (United States)

    Noda, Atsushi; Homma, Masanori; Utashima, Masayoshi

    This paper reports the results of a study of a super low altitude satellite whose orbital altitude is lower than any flown before. The altitude of a conventional earth observing satellite is generally around 600 km to 900 km; the lowest altitude of an earth observing satellite launched in Japan was 350 km, the Tropical Rainfall Measuring Mission (TRMM). By comparison, the satellite reported in this paper is much lower than that, and it is planned to orbit below 200 km. Furthermore, the planned flight duration is more than two years. No satellite in the world has kept such a low altitude for that long, because a satellite in such a low orbit decays quickly under the strong air drag. Our satellite will cancel the air drag effect by ion engine thrust. To realize this idea, a drag-free system will be applied, which usually leads to a complicated and expensive satellite system. We, however, succeeded in finding a robust control law for a simple system even under unpredictable changes of air drag. When the altitude of the satellite is lowered successfully, the spatial resolution of an optical sensor can be greatly improved. If a SAR is carried on the satellite, it enables a drastic reduction of electric power consumption and a major improvement in spatial resolution at the same time.
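    The drag that the ion engine must cancel at such altitudes can be estimated from the standard drag equation. The numbers below (density, drag coefficient, area, mass) are rough assumed values for a generic small satellite near 180 km, not the parameters of the study.

      import numpy as np

      # Rough, assumed values for a small satellite near 180 km altitude
      rho = 5e-10                    # kg/m^3, typical thermospheric density (varies with solar activity)
      Cd, A, m = 2.2, 1.0, 400.0     # drag coefficient, frontal area (m^2), spacecraft mass (kg)
      mu, Re, h = 3.986004418e14, 6.371e6, 180e3

      v = np.sqrt(mu / (Re + h))             # circular orbital speed (m/s)
      drag = 0.5 * rho * Cd * A * v**2       # N; the average thrust the ion engine must supply
      print(f"orbital speed {v:.0f} m/s, drag to cancel {drag*1e3:.1f} mN, "
            f"deceleration if unopposed {drag/m:.2e} m/s^2")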

  15. Investigating the auroral electrojets with low altitude polar orbiting satellites

    DEFF Research Database (Denmark)

    Moretto, T.; Olsen, Nils; Ritter, P.

    2002-01-01

    Three geomagnetic satellite missions currently provide high precision magnetic field measurements from low altitude polar orbiting spacecraft. We demonstrate how these data can be used to determine the intensity and location of the horizontal currents that flow in the ionosphere, predominantly in the auroral electrojets. … A very high degree of correlation (correlation coefficients of 0.8-0.9) is observed between the amplitudes of the derived currents and the commonly used auroral electrojet indices based on magnetic measurements at ground. This points to the potential of defining an auroral activity index based on the satellite observations, which could be useful for space weather monitoring. … The method is also applied to another event for which the combined measurements of the three satellites provide a comprehensive view of the current systems. The analysis hereof reveals some surprising results concerning the connection between solar wind driver and the resulting ionospheric currents. Specifically, preconditioning of the magnetosphere (history of the interplanetary magnetic field) is seen to play an important role…

  16. Computer vision techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar

    1990-01-01

    Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles, presents challenging problems. Research is described which applies techniques from computer vision to automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for the obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.

  17. Low-altitude ion heating with downflowing and upflowing ions

    Science.gov (United States)

    Shen, Y.; Knudsen, D. J.; Burchill, J. K.; Howarth, A. D.; Yau, A. W.; James, G.; Miles, D.; Cogger, L. L.; Perry, G. W.

    2017-12-01

    Mechanisms that energize ions at the initial stage of ion upflow are still not well understood. We statistically investigate ionospheric ion energization and field-aligned motion at very low altitudes (330-730 km) using simultaneous plasma, magnetic field, wave electric field and optical data from the e-POP satellite. The high-time-resolution (10 ms) dataset enables us to study the micro-structures of ion heating and field-aligned ion motion. The ion temperature and field-aligned bulk flow velocity are derived from 2-D ion distribution functions measured by the SEI instrument. From March 2015 to March 2016, we found 17 orbits (24 ion heating periods in total) with clear ion heating signatures passing across the dayside cleft or the nightside auroral regions. Most of these events have consistent ion heating and flow velocity characteristics observed from both the SEI and IRM instruments. The perpendicular ion temperature goes up to 4.5 eV within a 2 km-wide region in some cases, in which the Radio Receiver Instrument (RRI) sees broadband extremely low frequency (BBELF) waves, demonstrating significant wave-ion heating down to as low as 350 km. The e-POP Fast Auroral Imager (FAI) and Magnetic Field (MGF) instruments show that many events are associated with active aurora and are within downward current regions. Contrary to what would be expected from mirror-force acceleration of heated ions, the majority of these heating events (17 out of 24) are associated with core ion downflow rather than upflow. These statistical results provide us with new insights into ion heating and field-aligned flow processes at very low altitudes.

  18. Proton isotropy boundaries as measured on mid- and low-altitude satellites

    Directory of Open Access Journals (Sweden)

    N. Yu. Ganushkina

    2005-07-01

    Full Text Available Polar CAMMICE MICS proton pitch angle distributions with energies of 31-80 keV were analyzed to determine the locations where anisotropic pitch angle distributions (perpendicular flux dominating change to isotropic distributions. We compared the positions of these mid-altitude isotropic distribution boundaries (IDB for different activity conditions with low-altitude isotropic boundaries (IB observed by NOAA 12. Although the obtained statistical properties of IDBs were quite similar to those of IBs, a small difference in latitudes, most pronounced on the nightside and dayside, was found. We selected several events during which simultaneous observations in the same local time sector were available from Polar at mid-altitudes, and NOAA or DMSP at low-altitudes. Magnetic field mapping using the Tsyganenko T01 model with the observed solar wind input parameters showed that the low- and mid-altitude isotropization boundaries were closely located, which leads us to suggest that the Polar IDB and low-altitude IBs are related. Furthermore, we introduced a procedure to control the difference between the observed and model magnetic field to reduce the large scatter in the mapping. We showed that the isotropic distribution boundary (IDB lies in the region where Rc/ρ~6, that is at the boundary of the region where the non-adiabatic pitch angle scattering is strong enough. We therefore conclude that the scattering in the large field line curvature regions in the nightside current sheet is the main mechanism producing isotropization for the main portion of proton population in the tail current sheet. This mechanism controls the observed positions of both IB and IDB boundaries. Thus, this tail region can be probed, in its turn, with observations of these isotropy boundaries. Keywords. Magnetospheric physics (Energetic particles, Precipitating; Magnetospheric configuration and dynamics; Magnetotail
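    The Rc/ρ ≈ 6 isotropization criterion quoted above compares the field-line curvature radius with the particle gyroradius. A quick back-of-the-envelope check for a proton in the 31-80 keV range in a weak current-sheet field is sketched below; the field strength and curvature radius are assumed, illustrative values only.

      import numpy as np

      m_p, q = 1.6726e-27, 1.602e-19        # proton mass (kg) and charge (C)

      def gyroradius(E_keV, B_nT):
          v = np.sqrt(2.0 * E_keV * 1e3 * q / m_p)   # non-relativistic speed
          return m_p * v / (q * B_nT * 1e-9)         # metres

      E, B = 50.0, 20.0                     # 50 keV proton, 20 nT current-sheet field (assumed)
      Rc = 6.371e6                          # assumed curvature radius ~1 Earth radius
      rho = gyroradius(E, B)
      print(f"gyroradius {rho/1e3:.0f} km, Rc/rho = {Rc/rho:.1f} "
            f"(below ~6: strong non-adiabatic pitch angle scattering)")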

  20. Developing a Model Component

    Science.gov (United States)

    Fields, Christina M.

    2013-01-01

    The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a piece of Ground Servicing Equipment (GSE) that supported the Space Shuttle Orbiter. The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
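    The pressure and velocity differential across the filter-dryer described above follows from continuity plus Bernoulli's equation. The sketch below uses made-up port and core flow areas and an assumed coolant density purely to show the calculation; it is not the actual UCTS component data.

      # Incompressible, loss-free sketch: continuity gives the velocity change,
      # Bernoulli gives the corresponding pressure change across the dryer core.
      rho = 1000.0          # kg/m^3, assumed coolant density
      q = 2.0e-3            # m^3/s, assumed volumetric flow rate
      a_in, a_core = 5.0e-4, 3.0e-4   # m^2, assumed inlet and core flow areas

      v_in = q / a_in
      v_core = q / a_core                       # continuity: A1*v1 = A2*v2
      dp = 0.5 * rho * (v_core**2 - v_in**2)    # Bernoulli: p1 - p2 = rho/2 * (v2^2 - v1^2)
      print(f"v_in = {v_in:.2f} m/s, v_core = {v_core:.2f} m/s, pressure drop = {dp:.0f} Pa")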

  1. UAV Low Altitude Photogrammetry for Power Line Inspection

    Directory of Open Access Journals (Sweden)

    Yong Zhang

    2017-01-01

    Full Text Available When the distance between an obstacle and a power line is less than the discharge distance, a discharge arc can be generated, resulting in the interruption of power supplies. Therefore, regular safety inspections are necessary to ensure the safe operation of power grids. Tall vegetation and buildings are the key factors threatening the safe operation of extra high voltage transmission lines within a power line corridor. Manual or light detection and ranging (LiDAR) based inspections are time consuming and expensive. To make safety inspections more efficient and flexible, a low-altitude unmanned aerial vehicle (UAV) remote-sensing platform, equipped with an optical digital camera, was used to inspect power line corridors. We propose a semi-patch matching algorithm based on epipolar constraints, using both the correlation coefficient (CC) and the shape of its curve to extract three dimensional (3D) point clouds for a power line corridor. We use a stereo image pair from inter-strip to improve power line measurement accuracy by transforming the power line direction to be approximately perpendicular to the epipolar line. The distance between the power lines and the 3D point cloud is taken as a criterion for locating obstacles within the power line corridor automatically. Experimental results show that our proposed method is a reliable, cost effective, and applicable way for practical power line inspection and can locate obstacles within the power line corridor with accuracy better than ±0.5 m.
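    The correlation-coefficient matching at the heart of the semi-patch approach can be illustrated with plain normalized cross-correlation evaluated along an epipolar line of a rectified stereo pair. The window size, search range, and synthetic arrays below are assumptions for demonstration, not the paper's semi-patch algorithm itself.

      import numpy as np

      def ncc(a, b):
          a = a - a.mean(); b = b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def match_along_epipolar(left, right, row, col, half=7, search=60):
          """Best-correlating column in `right` along the same (rectified) row."""
          tpl = left[row - half:row + half + 1, col - half:col + half + 1]
          scores = []
          for c in range(max(half, col - search), min(right.shape[1] - half, col + search)):
              win = right[row - half:row + half + 1, c - half:c + half + 1]
              scores.append((ncc(tpl, win), c))
          return max(scores)            # (correlation coefficient, matching column)

      rng = np.random.default_rng(1)
      right = rng.normal(size=(200, 300))
      left = np.roll(right, 12, axis=1)          # synthetic 12-pixel disparity
      print(match_along_epipolar(left, right, row=100, col=150))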

  2. Stable low-altitude orbits around Ganymede considering a disturbing body in a circular orbit

    Science.gov (United States)

    Cardoso dos Santos, J.; Carvalho, J. P. S.; Vilhena de Moraes, R.

    2014-10-01

    Some missions are being planned to visit Ganymede, like the Europa Jupiter System Mission, a cooperation between NASA and ESA to insert the spacecraft JGO (Jupiter Ganymede Orbiter) into orbit around Ganymede. Comprehension of the dynamics of orbits around this planetary satellite is essential for the success of this type of mission. Thus, this work performs a search for low-altitude orbits around Ganymede, with an emphasis on polar orbits, which can be useful in the planning of space missions with respect to the stability of orbits of artificial satellites. The study considers orbits of artificial satellites around Ganymede under the influence of the third body (Jupiter's gravitational attraction) and of perturbations due to the non-uniform distribution of mass (J_2 and J_3) of the main body. A simplified dynamic model for these perturbations is used. The Lagrange planetary equations are used to describe the orbital motion of the artificial satellite. The equations of motion are developed in closed form to avoid expansions in eccentricity and inclination. The results show the argument of pericenter circulating; however, low-altitude (100 and 150 km) polar orbits are stable, and the other orbital elements vary with small amplitudes. Thus, such orbits are convenient for future space missions to Ganymede. Acknowledgments: FAPESP (processes n° 2011/05671-5, 2012/12539-9 and 2012/21023-6).
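    The circulation of the argument of pericenter reported above is driven largely by the secular rates that the Lagrange planetary equations give for the J2 term. A quick evaluation of the classical J2 secular formulas for a 100 km circular polar Ganymede orbit is sketched below; the Ganymede GM, radius, and J2 are approximate literature values quoted only for illustration.

      import numpy as np

      # Approximate Ganymede constants (literature values, used here only for illustration)
      GM = 9.8878e12          # m^3/s^2
      R = 2.6341e6            # m
      J2 = 1.28e-4

      # 100 km-altitude, near-circular polar orbit
      a_sma, e, inc = R + 100e3, 0.001, np.radians(90.0)
      n = np.sqrt(GM / a_sma**3)                    # mean motion (rad/s)
      p = a_sma * (1.0 - e**2)

      # Classical secular rates from the Lagrange planetary equations (J2 only)
      dRAAN = -1.5 * n * J2 * (R / p)**2 * np.cos(inc)
      dargp = 0.75 * n * J2 * (R / p)**2 * (5.0 * np.cos(inc)**2 - 1.0)

      day = 86400.0
      print(f"node rate {np.degrees(dRAAN)*day:.3f} deg/day, "
            f"pericenter rate {np.degrees(dargp)*day:.3f} deg/day")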

  3. Identification and observations of the plasma mantle at low altitude

    International Nuclear Information System (INIS)

    Newell, P.T.; Meng, Ching-I.; Sanchez, E.R.; Burke, W.J.; Greenspan, M.E.

    1991-01-01

    The direct injection of magnetosheath plasma into the cusp produces at low altitude a precipitation regime with an energy-latitude dispersion, the more poleward portion of which the authors herein term the cusp plume. An extensive survey of the Defense Meteorological Satellite Program (DMSP) F7 and F9 32 eV to 30 keV precipitating particle data shows that similar dispersive signatures exist over much of the dayside, just poleward of the auroral oval. Away from noon (or more precisely, anywhere not immediately poleward of the cusp) the fluxes are reduced by a factor of about 10 as compared to the cusp plume, but other characteristics are quite similar. For example, the inferred temperatures and flow velocities, and the characteristic decline of energy and number flux with increasing latitude, are essentially the same in a longitudinally broad ring of precipitation a few degrees thick in latitude over much of the dayside. They conclude that the field lines on which such precipitation occurs thread the magnetospheric plasma mantle over the entire longitudinally extended ring. Besides the location of occurrence (i.e., immediately poleward of the dayside oval), the identification is based especially on the associated very soft ion spectra, which have densities from a few times 10^-2 to a few times 10^-1 /cm^3; on the temperature range, which is from a few tens of eV up to about 200 eV; and on the characteristic gradients with latitude. Further corroborating evidence that the precipitation is associated with field lines which thread the plasma mantle includes drift meter observations which show that regions so identified based on the particle data consistently lie on antisunward convecting field lines. The observations indicate that some dayside high-latitude auroral features just poleward of the auroral oval are embedded in the plasma mantle.

  4. Capabilities of unmanned aircraft vehicles for low altitude weed detection

    Science.gov (United States)

    Pflanz, Michael; Nordmeyer, Henning

    2014-05-01

    Sustainable crop production and food security require consumer-safe and environmentally safe plant protection. It is now recognized that precise weed monitoring approaches could help apply pesticides in a way that corresponds to field variability. In this regard, site-specific weed management may contribute to herbicide application that is more ecologically aware and economical. First attempts at precision agriculture date back to the 1980s. Since that time, remote sensing from satellites or manned aircraft has been investigated and used in agricultural practice, but is currently inadequate for separating weeds in an early growth stage from cultivated plants. In contrast, low-cost image capturing at low altitude from unmanned aircraft vehicles (UAVs) provides higher spatial resolution and almost real-time processing. Particularly, rotary-wing aircraft are suitable for precise path or stationary flight. This minimises motion blur and provides better image overlapping for stitching and mapping procedures. Through improved image analyses and the recent increase in the availability of microcontrollers and powerful batteries for UAVs, it can be expected that the spatial mapping of weeds will be enhanced in the future. A six-rotor microcopter was equipped with a modified RGB camera taking images of agricultural fields. The hexacopter operates within predefined pathways at adjusted altitudes (from 5 to 10 m) using GPS navigation. Different scenarios of optical weed detection have been carried out with regard to variable altitude, image resolution, and weed and crop growth stages. Our experience showed high capability for site-specific weed control. Image analyses aimed at recognition of weed patches can be used to adapt herbicide application to varying weed occurrence across a field.

  5. Wetland Vegetation Integrity Assessment with Low Altitude Multispectral Uav Imagery

    Science.gov (United States)

    Boon, M. A.; Tesfamichael, S.

    2017-08-01

    Until recently, multispectral sensors were too heavy and bulky for use on Unmanned Aerial Vehicles (UAVs), but this has changed and they are now commercially available. The focus in the usage of these sensors is mostly directed towards the agricultural sector, where the emphasis is on precision farming; applications of these sensors for mapping wetland ecosystems are rare. Here, we evaluate the performance of low altitude multispectral UAV imagery to determine the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the determination of the integrity of the wetland vegetation. Furthermore, we tested different software applications for the processing of the imagery. The advantages and disadvantages we experienced with these applications are also presented briefly in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn over a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features to derive a high resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high resolution multispectral orthoimages and NDVI. The WET-Health vegetation module completed with the aid of the multispectral UAV products indicated that the vegetation of the wetland is largely modified ("D" PES Category) and that the condition is expected to
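    The NDVI layer that informs the vegetation-integrity rating is a simple band ratio of the near-infrared and red reflectance. The sketch below computes NDVI and a crude disturbance mask from two already-orthorectified band arrays; the synthetic band arrays and the 0.3 threshold are placeholders, not the study's calibrated values.

      import numpy as np

      def ndvi(nir, red):
          nir = nir.astype(float); red = red.astype(float)
          return (nir - red) / (nir + red + 1e-9)      # small epsilon avoids division by zero

      # Placeholder reflectance arrays (in practice: orthomosaic bands from the multispectral camera)
      rng = np.random.default_rng(2)
      red = rng.uniform(0.02, 0.20, size=(500, 500))
      nir = rng.uniform(0.10, 0.60, size=(500, 500))

      v = ndvi(nir, red)
      disturbed = v < 0.3            # assumed threshold for sparse or stressed vegetation
      print(f"mean NDVI {v.mean():.2f}, disturbed fraction {disturbed.mean():.1%}")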

  6. Correction for reflected sky radiance in low-altitude coastal hyperspectral images.

    Science.gov (United States)

    Kim, Minsu; Park, Joong Yong; Kopilevich, Yuri; Tuell, Grady; Philpot, William

    2013-11-10

    Low-altitude coastal hyperspectral imagery is sensitive to reflections of sky radiance at the water surface. Even in the absence of sun glint, and for a calm water surface, the wide range of viewing angles may result in pronounced, low-frequency variations of the reflected sky radiance across the scan line depending on the solar position. The variation in reflected sky radiance can be obscured by strong high-spatial-frequency sun glint and at high altitude by path radiance. However, at low altitudes, the low-spatial-frequency sky radiance effect is frequently significant and is not removed effectively by the typical corrections for sun glint. The reflected sky radiance from the water surface observed by a low-altitude sensor can be modeled in the first approximation as the sum of multiple-scattered Rayleigh path radiance and the single-scattered direct-solar-beam radiance by the aerosol in the lower atmosphere. The path radiance from zenith to the half field of view (FOV) of a typical airborne spectroradiometer has relatively minimal variation, and its reflected radiance at the detector array results in a flat baseline. Therefore the along-track variation is mostly contributed by the forward single-scattered solar-beam radiance. The scattered solar-beam radiances arrive at the water surface with different incident angles. Thus the reflected radiance received at the detector array corresponds to a certain scattering angle, and its variation is most effectively parameterized using the downward scattering angle (DSA) of the solar beam. Computation of the DSA must account for the roll, pitch, and heading of the platform and the viewing geometry of the sensor along with the solar ephemeris. Once the DSA image is calculated, the near-infrared (NIR) radiance from selected water scan lines is compared, and a relationship between DSA and NIR radiance is derived. We then apply the relationship to the entire DSA image to create an NIR reference image. Using the NIR reference image
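    The core of the correction is an empirical relationship between the downward scattering angle (DSA) and the NIR radiance over water, which is then used to predict and remove the reflected sky component. A minimal sketch of that regression step is given below; the polynomial order, the synthetic DSA and NIR values, and the per-band scaling are assumptions rather than the published processing chain.

      import numpy as np

      # Placeholder samples from "water-only" scan lines: DSA (deg) and observed NIR radiance
      rng = np.random.default_rng(3)
      dsa_samples = rng.uniform(20.0, 140.0, 2000)
      nir_samples = 0.8 + 0.004 * (140.0 - dsa_samples) + rng.normal(0.0, 0.01, 2000)

      # Fit a low-order polynomial NIR = f(DSA) over the water pixels
      coef = np.polyfit(dsa_samples, nir_samples, deg=3)

      # Apply it to the full DSA image to build an NIR reference (sky-reflection) image
      dsa_image = rng.uniform(20.0, 140.0, size=(100, 512))
      nir_reference = np.polyval(coef, dsa_image)

      # Remove the modelled sky reflection from a band, with an assumed spectral scaling factor
      band = rng.uniform(1.0, 2.0, size=(100, 512))
      band_corrected = band - 0.9 * nir_reference
      print(nir_reference.mean(), band_corrected.mean())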

  7. Easy-to-Use UAV Ground Station Software for Low-Altitude Civil Operations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design and develop easy-to-use Ground Control Station (GCS) software for low-altitude civil Unmanned Aerial Vehicle (UAV) operations. The GCS software...

  8. Lagrangian analysis of low altitude anthropogenic plume processing across the North Atlantic

    Directory of Open Access Journals (Sweden)

    E. Real

    2008-12-01

    Full Text Available The photochemical evolution of an anthropogenic plume from the New-York/Boston region during its transport at low altitudes over the North Atlantic to the European west coast has been studied using a Lagrangian framework. This plume, originally strongly polluted, was sampled by research aircraft just off the North American east coast on 3 successive days, and then 3 days downwind off the west coast of Ireland where another aircraft re-sampled a weakly polluted plume. Changes in trace gas concentrations during transport are reproduced using a photochemical trajectory model including deposition and mixing effects. Chemical and wet deposition processing dominated the evolution of all pollutants in the plume. The mean net photochemical O3 production is estimated to be −5 ppbv/day leading to low O3 by the time the plume reached Europe. Model runs with no wet deposition of HNO3 predicted much lower average net destruction of −1 ppbv/day O3, arising from increased levels of NOx via photolysis of HNO3. This indicates that wet deposition of HNO3 is indirectly responsible for 80% of the net destruction of ozone during plume transport. If the plume had not encountered precipitation, it would have reached Europe with O3 concentrations of up to 80 to 90 ppbv and CO between 120 and 140 ppbv. Photochemical destruction also played a more important role than mixing in the evolution of plume CO due to high levels of O3 and water vapour showing that CO cannot always be used as a tracer for polluted air masses, especially in plumes transported at low altitudes. The results also show that, in this case, an increase in O3/CO slopes can be attributed to photochemical destruction of CO and not to photochemical O3 production as is often assumed.

  9. Component Reification in Systems Modelling

    DEFF Research Database (Denmark)

    Bendisposto, Jens; Hallerstede, Stefan

    When modelling concurrent or distributed systems in Event-B, we often obtain models where the structure of the connected components is specified by constants. Their behaviour is specified by the non-deterministic choice of event parameters for events that operate on shared variables. From a certain......? These components may still refer to shared variables. Events of these components should not refer to the constants specifying the structure. The non-deterministic choice between these components should not be via parameters. We say the components are reified. We need to address how the reified components get...... reflected into the original model. This reflection should indicate the constraints on how to connect the components....

  10. Component Composition Using Feature Models

    DEFF Research Database (Denmark)

    Eichberg, Michael; Klose, Karl; Mitschke, Ralf

    2010-01-01

    interface description languages. If this variability is relevant when selecting a matching component then human interaction is required to decide which components can be bound. We propose to use feature models for making this variability explicit and (re-)enabling automatic component binding. In our...... approach, feature models are one part of service specifications. This enables to declaratively specify which service variant is provided by a component. By referring to a service's variation points, a component that requires a specific service can list the requirements on the desired variant. Using...... these specifications, a component environment can then determine if a binding of the components exists that satisfies all requirements. The prototypical environment Columbus demonstrates the feasibility of the approach....

  11. Field line topology in the dayside cusp region inferred from low altitude particle observations

    International Nuclear Information System (INIS)

    Soeraas, F.

    1977-12-01

    Dayside low altitude satellite observations of the pitch angle and energy distribution of electrons and protons in the energy range 1 keV to 100 keV during quiet geomagnetic conditions reveal that at times there is a clear latitudinal separation between the precipitating low energy (keV) electrons and protons, with the protons precipitating poleward of the electrons. The high energy (100keV) proton precipitation overlaps both the low energy electron and proton precipitation. These observations are consistent with a model where magnetosheath particles stream in along the cusp field lines and are at the same time convected poleward by an electric field. Electrons with energies of a few keV move fast and give the ''ionospheric footprint'' of the distant cusp. The protons are partly convected poleward of the cusp and into the polar cap. Here the mirroring protons populate the plasma mantle. Equatorward of the cusp the pitch angle distribution of both electrons and protons with energies above a few keV have a pancake shaped distribution indicating closed geomagnetic field lines. The 1 keV electrons penetrate into this region of closed field line structure maintaining an isotropic pitch angle distribution. The intensity is, however, reduced with respect to what it was in the cusp region. It is suggested that these electrons, the lowest measured on the satellite, are associated with the entry layer.(Auth.)

  12. Differences in Hematological Traits between High- and Low-Altitude Lizards (Genus Phrynocephalus.

    Directory of Open Access Journals (Sweden)

    Songsong Lu

    Full Text Available Phrynocephalus erythrurus (Lacertilia: Agamidae) is considered to be the highest-living reptile in the world (about 4500-5000 m above sea level), whereas Phrynocephalus przewalskii inhabits low altitudes (about 1000-1500 m above sea level). Here, we report the differences in hematological traits between these two Phrynocephalus species. Compared with P. przewalskii, the results indicated that P. erythrurus has a higher oxygen carrying capacity, achieved by increasing red blood cell count (RBC), hemoglobin concentration ([Hb]) and hematocrit (Hct), and these elevations could promote oxygen carrying capacity without the disadvantage of high viscosity. The lower partial pressure of oxygen in arterial blood (PaO2) of P. erythrurus did not cause secondary alkalosis, which may be attributed to an efficient pulmonary system for oxygen (O2) loading. The elevated blood-O2 affinity in P. erythrurus may be achieved by increasing the intrinsic O2 affinity of isoHbs and balancing the independent effects of potential heterotropic ligands. We detected one α-globin gene and three β-globin genes, with 1 and 33 amino acid substitutions between these two species, respectively. Molecular dynamics simulation results showed that amino acid substitutions in the β-globin chains could lead to the elimination of hydrogen bonds in T-state Hb models of P. erythrurus. Based on the present data, we suggest that P. erythrurus has evolved an efficient oxygen transport system under unremitting hypobaric hypoxia.

  13. Monitoring beach evolution using low-altitude aerial photogrammetry and UAV drones

    Science.gov (United States)

    Rovere, Alessio; Casella, Elisa; Vacchi, Matteo; Mucerino, Luigi; Pedroncini, Andrea; Ferrari, Marco; Firpo, Marco

    2014-05-01

    Beach monitoring is essential in order to understand the mechanisms of evolution of soft coasts and the rates of erosion. Traditional beach monitoring techniques involve topographic and bathymetric surveys of the beach, and/or aerial photos repeated in time and compared through geographical information systems. A major problem with this kind of approach is the high economic cost. This often forces longer time lags between successive monitoring campaigns to reduce survey costs, with the consequence of fragmenting the information available for coastal zone management. MIRAMar is a project funded by Regione Liguria through the PO CRO European Social Fund, and has two main objectives: i) to study and develop an innovative, relatively low-cost technique to monitor the evolution of the shoreline using low-altitude Unmanned Aerial Vehicle (UAV) photogrammetry; ii) to study the impact of different types of storm events on a vulnerable coastal tract subject to coastal erosion, also using the data collected by the UAV instrument. To achieve these aims we use a drone with its hardware and software suite, traditional survey techniques (bathymetric surveys, topographic GPS surveys and GIS techniques), and we implement a numerical modeling chain (coupling hydrodynamic, wave and sand transport modules) in order to study the impact of different types of storm events on a vulnerable coastal tract subject to coastal erosion.

  14. Real-Time Autonomous Obstacle Avoidance for Low-Altitude Fixed-Wing Aircraft

    Science.gov (United States)

    Owlia, Shahboddin

    The GeoSurv II is an Unmanned Aerial Vehicle (UAV) being developed by Carleton University and Sander Geophysics. This thesis is in support of the GeoSurv II project. The objective of the GeoSurv II project is to create a fully autonomous UAV capable of performing geophysical surveys. In order to achieve this level of autonomy, the UAV, which due to the nature of its surveys flies at low altitude, must be able to avoid potential obstacles such as trees, powerlines, telecommunication towers, etc. Developing a method to avoid these obstacles is the objective of this thesis. The literature is rich in methods for trajectory planning and mid-air collision avoidance with other aircraft. In contrast, in this thesis, a method for avoiding static obstacles that are not known a priori is developed. The potential flow theory and panel method are borrowed from fluid mechanics and are employed to generate evasive maneuvers when obstacles are encountered. By means of appropriate modelling of obstacles, the aircraft's constraints are taken into account such that the evasive maneuvers are feasible for the UAV. Moreover, the method is developed with consideration of the limitations of obstacle detection in GeoSurv II. Due to the unavailability of the GeoSurv II aircraft, and the lack of a complete model for GeoSurv II, the method developed is implemented on the non-linear model of the Aerosonde UAV. The Aerosonde model is then subjected to various obstacle scenarios and it is seen that the UAV successfully avoids the obstacles.
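    The potential-flow idea borrowed from fluid mechanics is that superposing a uniform flow (toward the goal) with singularities placed on obstacles yields a smooth velocity field whose streamlines bend around the obstacles; the vehicle then steers along the local velocity vector. The sketch below uses a single 2-D point source instead of a full panel distribution, with made-up positions and strengths, just to show the mechanics; it is not the thesis' panel-method implementation.

      import numpy as np

      def guidance_velocity(p, goal, obstacles, U=1.0):
          """2-D potential-flow guidance: uniform flow toward the goal plus point sources."""
          d = goal - p
          v = U * d / (np.linalg.norm(d) + 1e-9)          # uniform flow toward the goal
          for xo, strength in obstacles:
              r = p - xo
              r2 = r @ r + 1e-9
              v += strength / (2.0 * np.pi) * r / r2       # point-source repulsion at the obstacle
          return v

      goal = np.array([100.0, 0.0])
      obstacles = [(np.array([50.0, 1.0]), 60.0)]          # assumed obstacle position and source strength
      p = np.array([0.0, 0.0])
      path = [p.copy()]
      for _ in range(400):                                 # integrate along the streamline
          v = guidance_velocity(p, goal, obstacles)
          p = p + 0.5 * v / np.linalg.norm(v)              # 0.5 m steps along the commanded heading
          path.append(p.copy())
      miss = min(np.linalg.norm(q - obstacles[0][0]) for q in path)
      print(f"closest approach to the obstacle: {miss:.1f} m")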

  15. Effects of repetitive training at low altitude on erythropoiesis in 400 and 800 m runners.

    Science.gov (United States)

    Frese, F; Friedmann-Bette, B

    2010-06-01

    Classical altitude training can cause an increase in total hemoglobin mass (THM) if a minimum "dose of hypoxia" is reached (altitude ≥2,000 m, ≥3 weeks). We wanted to find out if repetitive exposure to mild hypoxia during living and training at low altitude (training camps at low altitude interspersed by 3 weeks of sea-level training and at the same time points in a control group (CG) of 5 well-trained runners. EPO, sTfR and ferritin were also repeatedly measured during the altitude training camps. Repeated measures ANOVA revealed significant increases in EPO- and sTfR-levels during both training camps and a significant decrease in ferritin indicating enhanced erythropoietic stimulation during living and training at low altitude. Furthermore, significant augmentation of THM by 5.1% occurred in the course of the 2 altitude training camps. In conclusion, repetitive living and training at low altitude leads to a hypoxia-induced increase in erythropoietic stimulation in elite 400 m and 800 m runners and, apparently, might also cause a consecutive augmentation of THM.

  16. 75 FR 6319 - Proposed Amendment of Low Altitude Area Navigation Route T-254; Houston, TX

    Science.gov (United States)

    2010-02-09

    ... Amendment of Low Altitude Area Navigation Route T-254; Houston, TX AGENCY: Federal Aviation Administration... altitude Area Navigation (RNAV) route T-254 in the Houston, TX, terminal area by eliminating the segment... safety and the efficient use of the navigable airspace in the Houston, TX, terminal area. DATES: Comments...

  17. 75 FR 16336 - Establishment of Low Altitude Area Navigation Route (T-284); Houston, TX

    Science.gov (United States)

    2010-04-01

    ... (T-284); Houston, TX AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final rule. SUMMARY: This action establishes a low altitude area navigation (RNAV) route, designated T-284, in the Houston... navigable airspace in the Houston, TX, terminal area. DATES: Effective date 0901 UTC, July 29, 2010. The...

  18. 75 FR 18047 - Amendment of Low Altitude Area Navigation Route T-254; Houston, TX

    Science.gov (United States)

    2010-04-09

    ...-254; Houston, TX AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final rule. SUMMARY: This action amends low altitude Area Navigation (RNAV) route T-254 in the Houston, TX, terminal area by... Houston, TX, terminal area. DATES: Effective Dates: 0901 UTC, June 3, 2010. The Director of the Federal...

  19. Unmanned Aerial Systems Traffic Management (UTM): Safely Enabling UAS Operations in Low-Altitude Airspace

    Science.gov (United States)

    Rios, Joseph

    2016-01-01

    Currently, there is no established infrastructure to enable and safely manage the widespread use of low-altitude airspace and UAS flight operations. Given this, and understanding that the FAA faces a mandate to modernize the present air traffic management system through computer automation and significantly reduce the number of air traffic controllers by FY 2020, the FAA maintains that a comprehensive, yet fully automated UAS traffic management (UTM) system for low-altitude airspace is needed. The concept of UTM is to begin by leveraging concepts from the system of roads, lanes, stop signs, rules and lights that govern vehicles on the ground today. Building on its legacy of work in air traffic management (ATM), NASA is working with industry to develop prototype technologies for a UAS Traffic Management (UTM) system that would evolve airspace integration procedures for enabling safe, efficient low-altitude flight operations that autonomously manage UAS operating in an approved low-altitude airspace environment. UTM is a cloud-based system that will autonomously manage all traffic at low altitudes to include UASs being operated beyond visual line of sight of an operator. UTM would thus enable safe and efficient flight operations by providing fully integrated traffic management services such as airspace design, corridors, dynamic geofencing, severe weather and wind avoidance, congestion management, terrain avoidance, route planning and re-routing, separation management, sequencing and spacing, and contingency management. UTM removes the need for human operators to continuously monitor aircraft operating in approved areas. NASA envisions concepts for two types of UTM systems. The first would be a small portable system, which could be moved between geographical areas in support of operations such as precision agriculture and public safety. The second would be a persistent system, which would support low-altitude operations in an approved area by providing continuous automated

  20. Medical Implications of Space Radiation Exposure Due to Low-Altitude Polar Orbits.

    Science.gov (United States)

    Chancellor, Jeffery C; Auñon-Chancellor, Serena M; Charles, John

    2018-01-01

    Space radiation research has progressed rapidly in recent years, but there remain large uncertainties in predicting and extrapolating biological responses to humans. Exposure to cosmic radiation and solar particle events (SPEs) may pose a critical health risk to future spaceflight crews and can have a serious impact on all biomedical aspects of space exploration. The relatively minimal shielding of the cancelled 1960s Manned Orbiting Laboratory (MOL) program's space vehicle and the high inclination polar orbits would have left the crew susceptible to high exposures of cosmic radiation and high dose-rate SPEs that are mostly unpredictable in frequency and intensity. In this study, we have modeled the nominal and off-nominal radiation environment that a MOL-like spacecraft vehicle would be exposed to during a 30-d mission using high performance, multicore computers. Projected doses from a historically large SPE (e.g., the August 1972 solar event) have been analyzed in the context of the MOL orbit profile, providing an opportunity to study its impact to crew health and subsequent contingencies. It is reasonable to presume that future commercial, government, and military spaceflight missions in low-Earth orbit (LEO) will have vehicles with similar shielding and orbital profiles. Studying the impact of cosmic radiation to the mission's operational integrity and the health of MOL crewmembers provides an excellent surrogate and case-study for future commercial and military spaceflight missions.Chancellor JC, Auñon-Chancellor SM, Charles J. Medical implications of space radiation exposure due to low-altitude polar orbits. Aerosp Med Hum Perform. 2018; 89(1):3-8.

  1. Influence of Ionization Degrees on the Evolutions of Charged Particles in Atmospheric Plasma at Low Altitude

    International Nuclear Information System (INIS)

    Pang Xuexia; Deng Zechao; Jia Pengying; Liang Weihua; Li Xia

    2012-01-01

    A zero-dimensional model which includes 56 species of reactants and 427 reactions is used to study the behavior of charged particles in atmospheric plasmas with different ionization degrees at low altitude (near 0 km). The constant-coefficient nonlinear equations are solved using the quasi-steady-state approximation method. The electron lifetimes are obtained for afterglow plasma with different initial values, and the temporal evolutions of the main charged species, which are dominant in the reaction processes, are presented. The results show that the electron number density decays quickly. The electron lifetimes are shortened by about two orders of magnitude with increasing ionization degree. Electrons then attach to neutral particles and produce negative ions. When the initial electron densities are in the range of 10^10 to 10^14 cm^-3, the negative ions have sufficiently high densities and long lifetimes for air purification, disinfection and sterilization. Electrons, O2^-, O4^-, CO4^- and CO3^- are the dominant negative species when the initial electron density n_e0 ≤ 10^13 cm^-3, and only electrons and CO3^- are left when n_e0 ≥ 10^15 cm^-3. N2^+, N4^+ and O2^+ are dominant among the positive charges for any ionization degree. Other positive species, such as O4^+, N3^+, NO^+, NO2^+, Ar2^+ and H3O^+·H2O, are dominant only for a certain ionization degree and in a certain period. (low temperature plasma)
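    As a purely illustrative companion to the record above, the sketch below integrates a drastically reduced afterglow chemistry (electrons, one lumped positive ion, one lumped negative ion) rather than the 56 species and 427 reactions of the actual model; the rate coefficients, attachment frequency and initial density are assumed order-of-magnitude placeholders, not values from the paper.

      # Toy 0-D afterglow: three lumped charged species instead of the paper's
      # 56 species / 427 reactions. All rate coefficients are assumed values.
      import numpy as np
      from scipy.integrate import solve_ivp

      BETA_EI = 1e-7      # electron-ion recombination coefficient [cm^3/s] (assumed)
      K_ATTACH = 1e5      # effective electron attachment frequency [1/s] (assumed)
      BETA_II = 1e-7      # ion-ion recombination coefficient [cm^3/s] (assumed)

      def rhs(t, y):
          n_e, n_neg = y
          n_pos = n_e + n_neg                     # quasi-neutrality
          dn_e = -BETA_EI * n_e * n_pos - K_ATTACH * n_e
          dn_neg = K_ATTACH * n_e - BETA_II * n_neg * n_pos
          return [dn_e, dn_neg]

      n_e0 = 1e12                                 # initial electron density [cm^-3]
      sol = solve_ivp(rhs, (0.0, 1e-2), [n_e0, 0.0], method="LSODA",
                      dense_output=True, rtol=1e-6, atol=1e3)

      # crude electron "lifetime": time for n_e to drop to 1/e of its initial value
      t = np.logspace(-9, -2, 400)
      n_e = sol.sol(t)[0]
      tau = t[np.argmax(n_e < n_e0 / np.e)]
      print(f"electron e-folding time: {tau:.2e} s")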

  2. Image Positioning Accuracy Analysis for Super Low Altitude Remote Sensing Satellites

    Directory of Open Access Journals (Sweden)

    Ming Xu

    2012-10-01

    Full Text Available Super low altitude remote sensing satellites maintain lower flight altitudes by means of ion propulsion in order to improve image resolution and positioning accuracy. The use of engineering data in design for achieving image positioning accuracy is discussed in this paper based on the principles of photogrammetry theory. The combined design of key parameters ensures that the line of sight of each detector element can be rebuilt exactly and that this direction precisely intersects the Earth's ellipsoid at the moment the camera on the satellite is imaging. These parameters include: orbit determination accuracy, attitude determination accuracy, camera exposure time, accurate synchronization of ephemeris and attitude data, geometric calibration and precise orbit verification. Precise simulation calculations show that the image positioning accuracy of super low altitude remote sensing satellites is not obviously improved. The attitude determination error of a satellite still restricts its positioning accuracy.
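    The geometric step described above, intersecting each detector element's line of sight with the Earth ellipsoid, can be sketched as follows. This is a generic ray-ellipsoid intersection under WGS84 constants, not the paper's engineering implementation, and the 268 km nadir-view example is hypothetical.

      # Generic ray-ellipsoid intersection in ECEF coordinates (WGS84 constants);
      # a sketch only, not the satellite's actual geolocation pipeline.
      import numpy as np

      WGS84_A = 6378137.0          # semi-major axis [m]
      WGS84_B = 6356752.3142       # semi-minor axis [m]

      def los_ellipsoid_intersection(sat_pos, los_dir, a=WGS84_A, b=WGS84_B):
          """First intersection of the ray sat_pos + t*los_dir (t > 0) with
          x^2/a^2 + y^2/a^2 + z^2/b^2 = 1, or None if the ray misses."""
          p = np.asarray(sat_pos, dtype=float)
          d = np.asarray(los_dir, dtype=float)
          d = d / np.linalg.norm(d)
          w = np.array([1.0 / a**2, 1.0 / a**2, 1.0 / b**2])   # diagonal scaling
          A = np.dot(w * d, d)
          B = 2.0 * np.dot(w * p, d)
          C = np.dot(w * p, p) - 1.0
          disc = B * B - 4.0 * A * C
          if disc < 0.0:
              return None
          t = (-B - np.sqrt(disc)) / (2.0 * A)                 # nearer root = first hit
          return p + t * d if t > 0 else None

      # hypothetical nadir view from a 268 km orbit above the equator
      sat = np.array([WGS84_A + 268e3, 0.0, 0.0])
      print(los_ellipsoid_intersection(sat, [-1.0, 0.0, 0.0]))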

  3. Unmanned Aircraft Systems Traffic Management (UTM) Safely Enabling UAS Operations in Low-Altitude Airspace

    Science.gov (United States)

    Kopardekar, Parimal H.

    2016-01-01

    Unmanned Aircraft System (UAS) Traffic Management (UTM) Enabling Civilian Low-Altitude Airspace and Unmanned Aircraft System Operations What is the problem? Many beneficial civilian applications of UAS have been proposed, from goods delivery and infrastructure surveillance, to search and rescue, and agricultural monitoring. Currently, there is no established infrastructure to enable and safely manage the widespread use of low-altitude airspace and UAS operations, regardless of the type of UAS. A UAS traffic management (UTM) system for low-altitude airspace may be needed, perhaps leveraging concepts from the system of roads, lanes, stop signs, rules and lights that govern vehicles on the ground today, whether the vehicles are driven by humans or are automated. What system technologies is NASA exploring? Building on its legacy of work in air traffic management for crewed aircraft, NASA is researching prototype technologies for a UAS Traffic Management (UTM) system that could develop airspace integration requirements for enabling safe, efficient low-altitude operations. While incorporating lessons learned from today's well-established air traffic management system, which was a response that grew out of a mid-air collision over the Grand Canyon in the early days of commercial aviation, the UTM system would enable safe and efficient low-altitude airspace operations by providing services such as airspace design, corridors, dynamic geofencing, severe weather and wind avoidance, congestion management, terrain avoidance, route planning and re-routing, separation management, sequencing and spacing, and contingency management. One of the attributes of the UTM system is that it would not require human operators to monitor every vehicle continuously. The system could provide to human managers the data to make strategic decisions related to initiation, continuation, and termination of airspace operations. This approach would ensure that only authenticated UAS could operate

  4. Geometry of duskside equatorial current during magnetic storm main phase as deduced from magnetospheric and low-altitude observations

    Directory of Open Access Journals (Sweden)

    S. Dubyagin

    2013-03-01

    Full Text Available We present the results of a coordinated study of the moderate magnetic storm on 22 July 2009. The THEMIS and GOES observations of magnetic field in the inner magnetosphere were complemented by energetic particle observations at low altitude by the six NOAA POES satellites. Observations in the vicinity of geosynchronous orbit revealed a relatively thin (half-thickness of less than 1 RE) and intense current sheet in the dusk MLT sector during the main phase of the storm. The total westward current (integrated along the z-direction) on the duskside at r ~ 6.6 RE was comparable to that in the midnight sector. Such a configuration cannot be adequately described by existing magnetic field models with predefined current systems (error in B > 60 nT). At the same time, low-altitude isotropic boundaries (IB) of > 80 keV protons in the dusk sector were shifted ~ 4° equatorward relative to the IBs in the midnight sector. Both the equatorward IB shift and the current strength on the duskside correlate with the Sym-H* index. These findings imply a close relation between the current intensification and equatorward IB shift in the dusk sector. The analysis of IB dispersion revealed that high-energy IBs (E > 100 keV) always exhibit normal dispersion (i.e., that for pitch angle scattering on curved field lines). Anomalous dispersion is sometimes observed in the low-energy channels (~ 30–100 keV). The maximum occurrence rate of anomalous dispersion was observed during the main phase of the storm in the dusk sector.

  5. Exotic Meteoritic Phenomena: The Tunguska Event and Anomalous Low Altitude Fireballs - Manifestations of the Mirror World ?

    International Nuclear Information System (INIS)

    Foot, R.; Yoon, T.L.

    2002-01-01

    There are a number of very puzzling meteoritic events including (a) The Tunguska event. It is the only known example of a low altitude atmospheric explosion. It is also the largest recorded event. Remarkably no fragments or significant chemical traces have ever been recovered. (b) Anomalous low altitude fireballs which (in some cases) have been observed to hit the ground. The absence of fragments is particularly striking in these cases, but this is not the only reason they are anomalous. The other main puzzling feature is the lack of a consistent trajectory: low altitude fireballs, if caused by an ordinary cosmic body penetrating the Earth's atmosphere, should have been extremely luminous at high altitudes. But in these anomalous cases this is (remarkably) not observed to occur! On the other hand, there is strong evidence that most of our galaxy is made from exotic dark material, "dark matter". Mirror matter is one well-motivated dark matter candidate, since it is dark and stable and it is required to exist if particle interactions are mirror symmetric. If mirror matter is the dark matter, then some amount must exist in our solar system. Although there is not much room for a large amount of mirror matter in the inner solar system, numerous small asteroid-sized mirror matter objects are a fascinating possibility because they can potentially collide with the Earth. We demonstrate that the mirror matter theory allows for a simple explanation for the puzzling meteoritic events [both (a) and (b)] if they are due to mirror matter space-bodies. A direct consequence of this explanation is that mirror matter fragments should exist in (or on) the ground at various impact sites. The properties of this potentially recoverable material depend importantly on the sign of the photon-mirror photon kinetic mixing parameter, ε. We argue that the broad characteristics of the anomalous events suggest that ε is probably negative. Strategies for detecting mirror matter in the

  6. Safely Enabling Civilian Unmanned Aerial System (UAS) Operations in Low-Altitude Airspace by Unmanned Aerial System Traffic Management (UTM)

    Science.gov (United States)

    Kopardekar, Parimal Hemchandra

    2015-01-01

    Many UAS will operate at lower altitude (Class G, below 2000 feet). There is an urgent need for a system for civilian low-altitude airspace and UAS operations. Stakeholders want to work with NASA to enable safe operations.

  7. Observations of the Earth's magnetic field from the Space Station: Measurement at high and extremely low altitude using Space Station-controlled free-flyers

    Science.gov (United States)

    Webster, W., Jr.; Frawley, J. J.; Stefanik, M.

    1984-01-01

    Simulation studies established that the main (core), crustal and electrojet components of the Earth's magnetic field can be observed with greater resolution or over a longer time-base than is presently possible by using the capabilities provided by the space station. Two systems are studied. The first, a long-lifetime magnetic monitor, would observe the main field and its time variation. The second, a remotely piloted magnetic probe, would observe the crustal field at low altitude and the electrojet field in situ. The design and scientific performance of these systems are assessed. The advantages of the space station are reviewed.

  8. Low altitude observations of the energetic electrons in the outer radiation belt during isolated substorms

    International Nuclear Information System (INIS)

    Varga, L.; Venkatesan, D.; Johns Hopkins Univ., Laurel, MD; Meng, C.I.

    1985-01-01

    The low energy (1-20 keV) detector registering particles onboard the polar-orbiting low altitude (approx. 850 km) DMSP-F2 and -F3 satellites also records high energy electrons penetrating the detector walls. Thus the dynamics of this electron population at L=3.5 can be studied during isolated periods of magnetospheric substorms identified by the auroral electrojet (AE), geomagnetic (Kp) and ring current (Dst) indices. Temporal changes in the electron flux during the substorms are observed to be an additional contribution riding over the top of the pre-storm (or geomagnetically quiet-time) electron population; the duration of the interval of intensity variations is observed to be about the same as that of the enhancement of the AE index. This indicates the temporal response of the outer radiation belt to the substorm activity, since the observation was made in the ''horns'' of the outer radiation belt. The observed enhanced radiation at low altitude may be associated with the instantaneous increase and/or dumping of the outer radiation belt energetic electrons during each isolated substorm activity. (author)

  9. Normobaric Hypoxia Exposure during Low Altitude Stay and Performance of Elite-Level Race-Walkers

    Directory of Open Access Journals (Sweden)

    Gaurav Sikri, AB Srinivasa

    2016-06-01

    Full Text Available We read with profound interest the article titled 'Increased hypoxic dose after training at low altitude with 9h per night at 3000m normobaric hypoxia' by Carr et al. (2015). The authors concluded that low altitude (1380 m) combined with normobaric hypoxia of 3000 m improves total haemoglobin mass (Hbmass) and is an effective alternate method for training. Like other studies on elite athletes, the authors of the present work have pointed out that a major limitation was the non-availability of a control group consisting of subjects undertaking the same supervised training in normoxia. The total number of 'possible' subjects for the control group, taken from a previous study (Saunders et al., 2010), was 11, i.e. the placebo group (n = 6; 3 male and 3 female) and the nocebo group (n = 5; 3 female and 2 male). It seems likely that the authors of the present study have chosen only 10 subjects out of those 11. The criteria for exclusion of one subject and selection of 10 out of 11 subjects from the previous study to form the control group of the present study may require further elaboration.

  10. A New Perspective: Assessing the Spatial Distribution of Coral Bleaching with Unmanned Low Altitude Remote Sensing Systems

    Science.gov (United States)

    Levy, J.; Franklin, E. C.; Hunter, C. L.

    2016-12-01

    scientists a new perspective on meso-scale coral reef dynamics. We envision that similar low altitude aerial surveys will be incorporated as a standard component of shallow-water reef studies, especially on reefs too dangerous or remote for in situ surveys.

  11. Unmanned Aerial System (UAS) Traffic Management (UTM): Enabling Low-Altitude Airspace and UAS Operations

    Science.gov (United States)

    Kopardekar, Parimal H.

    2014-01-01

    Many civilian applications of Unmanned Aerial Systems (UAS) have been imagined, ranging from remote to congested urban areas, including goods delivery, infrastructure surveillance, agricultural support, and medical services delivery. Further, these UAS will have different equipage and capabilities based on considerations such as affordability and mission needs. Such a heterogeneous UAS mix, along with operations such as general aviation, helicopters, and gliders, must be safely accommodated at lower altitudes. However, key infrastructure to enable and safely manage widespread use of low-altitude airspace and UAS operations therein does not exist. Therefore, NASA is exploring functional design, concept and technology development, and a prototype UAS Traffic Management (UTM) system. UTM will support safe and efficient UAS operations for the delivery of goods and services

  12. High-resolution Ceres Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images

    Science.gov (United States)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2017-06-01

    The Dawn spacecraft Framing Camera (FC) acquired over 31,300 clear filter images of Ceres with a resolution of about 35 m/pxl during the eleven cycles in the Low Altitude Mapping Orbit (LAMO) phase between December 16 2015 and August 8 2016. We ortho-rectified the images from the first four cycles and produced a global, high-resolution, uncontrolled photomosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 62 tiles mapped at a scale of 1:250,000. The nomenclature used in this atlas was proposed by the Dawn team and was approved by the International Astronomical Union (IAU). The full atlas is available to the public through the Dawn Geographical Information System (GIS) web page [http://dawngis.dlr.de/atlas] and will become available through the NASA Planetary Data System (PDS) (http://pdssbn.astro.umd.edu/).

  13. Low-Altitude and Slow-Speed Small Target Detection Based on Spectrum Zoom Processing

    Directory of Open Access Journals (Sweden)

    Xuwang Zhang

    2018-01-01

    Full Text Available This paper proposes a spectrum zoom processing based target detection algorithm for detecting the weak echo of low-altitude and slow-speed small (LSS) targets in heavy ground clutter environments, which can be used to retrofit existing radar systems. Starting from the existing range-Doppler frequency images, the proposed method first concatenates the data from the same Doppler frequency slot of different images and then applies spectrum zoom processing. After clutter suppression, target detection can finally be performed. Through theoretical analysis and real data verification, it is shown that the proposed algorithm can obtain a preferable spectrum zoom result and improve the signal-to-clutter ratio (SCR) with a very low computational load.
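    A minimal sketch of what a spectrum zoom stage might look like is given below, assuming the common zoom-FFT construction (complex demodulation, low-pass filtering, decimation, FFT); the sampling rate, zoom factor and toy echo are placeholders and the code is not the authors' algorithm.

      # Assumed zoom-FFT construction: demodulate to baseband, low-pass filter,
      # decimate, then FFT the narrowband signal; not the paper's exact algorithm.
      import numpy as np
      from scipy.signal import firwin, lfilter

      def zoom_spectrum(x, fs, f_center, zoom_factor, n_fft=None):
          """Zoom into a band of width fs/zoom_factor centred on f_center."""
          n = np.arange(len(x))
          baseband = x * np.exp(-2j * np.pi * f_center * n / fs)   # shift band to DC
          taps = firwin(129, 1.0 / zoom_factor)                    # anti-alias low-pass
          decimated = lfilter(taps, 1.0, baseband)[::zoom_factor]
          n_fft = n_fft or len(decimated)
          spec = np.fft.fftshift(np.fft.fft(decimated, n_fft))
          freqs = f_center + np.fft.fftshift(np.fft.fftfreq(n_fft, zoom_factor / fs))
          return freqs, 20 * np.log10(np.abs(spec) + 1e-12)

      # toy slow-time sequence: a weak target at 51.2 Hz Doppler plus noise
      fs, n_pulses = 1000.0, 4096
      t = np.arange(n_pulses) / fs
      rng = np.random.default_rng(0)
      echo = np.exp(2j * np.pi * 51.2 * t) + 0.5 * (rng.standard_normal(n_pulses)
                                                    + 1j * rng.standard_normal(n_pulses))
      freqs, spec_db = zoom_spectrum(echo, fs, f_center=50.0, zoom_factor=8)
      print("peak near", round(float(freqs[np.argmax(spec_db)]), 2), "Hz")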

  14. Statistical Correlation of Low-Altitude ENA Emissions with Geomagnetic Activity from IMAGE MENA Observations

    Science.gov (United States)

    Mackler, D. A.; Jahn, J.- M.; Perez, J. D.; Pollock, C. J.; Valek, P. W.

    2016-01-01

    Plasma sheet particles transported Earthward during times of active magnetospheric convection can interact with exospheric/thermospheric neutrals through charge exchange. The resulting Energetic Neutral Atoms (ENAs) are free to leave the influence of the magnetosphere and can be remotely detected. ENAs associated with low-altitude (300-800 km) ion precipitation in the high-latitude atmosphere/ionosphere are termed low-altitude emissions (LAEs). Remotely observed LAEs are highly nonisotropic in velocity space such that the pitch angle distribution at the time of charge exchange is near 90°. The Geomagnetic Emission Cone of LAEs can be mapped spatially, showing where proton energy is deposited during times of varying geomagnetic activity. In this study we present a statistical look at the correlation between LAE flux (intensity and location) and geomagnetic activity. The LAE data are from the MENA imager on the IMAGE satellite over the declining phase of solar cycle 23 (2000-2005). The SYM-H, AE, and Kp indices are used to describe geomagnetic activity. The goal of the study is to evaluate properties of LAEs in ENA images and determine if those images can be used to infer properties of ion precipitation. Results indicate a general positive correlation to LAE flux for all three indices, with the SYM-H showing the greatest sensitivity. The magnetic local time distribution of LAEs is centered about midnight and spreads with increasing activity. The invariant latitude for all indices has a slightly negative correlation. The combined results indicate LAE behavior similar to that of ion precipitation.

  15. Statistical correlation of low-altitude ENA emissions with geomagnetic activity from IMAGE/MENA observations

    Science.gov (United States)

    Mackler, D. A.; Jahn, J.-M.; Perez, J. D.; Pollock, C. J.; Valek, P. W.

    2016-03-01

    Plasma sheet particles transported Earthward during times of active magnetospheric convection can interact with exospheric/thermospheric neutrals through charge exchange. The resulting Energetic Neutral Atoms (ENAs) are free to leave the influence of the magnetosphere and can be remotely detected. ENAs associated with low-altitude (300-800 km) ion precipitation in the high-latitude atmosphere/ionosphere are termed low-altitude emissions (LAEs). Remotely observed LAEs are highly nonisotropic in velocity space such that the pitch angle distribution at the time of charge exchange is near 90°. The Geomagnetic Emission Cone of LAEs can be mapped spatially, showing where proton energy is deposited during times of varying geomagnetic activity. In this study we present a statistical look at the correlation between LAE flux (intensity and location) and geomagnetic activity. The LAE data are from the MENA imager on the IMAGE satellite over the declining phase of solar cycle 23 (2000-2005). The SYM-H, AE, and Kp indices are used to describe geomagnetic activity. The goal of the study is to evaluate properties of LAEs in ENA images and determine if those images can be used to infer properties of ion precipitation. Results indicate a general positive correlation to LAE flux for all three indices, with the SYM-H showing the greatest sensitivity. The magnetic local time distribution of LAEs is centered about midnight and spreads with increasing activity. The invariant latitude for all indices has a slightly negative correlation. The combined results indicate LAE behavior similar to that of ion precipitation.

  16. Stochastic Modeling Of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2014-01-01

    reliable components are needed for wind turbines. In this paper, focus is on the reliability of critical components in the drivetrain, such as bearings and shafts. High failure rates of these components imply a need for more reliable components. To estimate the reliability of these components, stochastic models... are needed for initial defects and damage accumulation. In this paper, stochastic models are formulated considering some of the failure modes observed in these components. The models are based on theoretical considerations, manufacturing uncertainties, and size effects at different scales. It is illustrated how

  17. Modelling Livestock Component in FSSIM

    NARCIS (Netherlands)

    Thorne, P.J.; Hengsdijk, H.; Janssen, S.J.C.; Louhichi, K.; Keulen, van H.; Thornton, P.K.

    2009-01-01

    This document summarises the development of a ruminant livestock component for the Farm System Simulator (FSSIM). This includes treatments of energy and protein transactions in ruminant livestock that have been used as a basis for the biophysical simulations that will generate the input production

  18. The Origin of Low Altitude ENA Emissions from Storms in 2000-2005 as Observed by IMAGE/MENA

    Science.gov (United States)

    Perez, J. D.; Sheehan, M. M.; Jahn, J.; Mackler, D.; Pollock, C. J.

    2013-12-01

    Low Altitude Emissions (LAEs) are prevalent features of Energetic Neutral Atom (ENA) images of the inner magnetosphere. It is believed that they are created by precipitating ions that reach altitudes near 500 km and then charge exchange with oxygen atoms, subsequently escaping to be observed by satellite-borne ENA imagers. In this study, LAEs from the MENA instrument onboard the IMAGE satellite are studied in order to learn about the origin of the precipitating ions. Using the Tsyganenko 05 magnetic field model, the bright pixels capturing the LAEs are mapped to the equator. The LAEs are believed to originate from ions near their mirroring point, i.e., with pitch angles near 90°. Therefore the angle between the line-of-sight and the magnetic field at the point of origin is used to further constrain possible magnetospheric regions that are the origin of the ENAs. By observing the time dependence of the strength and location of the LAEs during geomagnetic storms in the years 2000-2005, the dynamics of the emptying and filling of the loss cone by injected particles is observed. Thus, information regarding the coupling between the inner magnetosphere and the ionosphere is obtained.

  19. Modeling the degradation of nuclear components

    International Nuclear Information System (INIS)

    Stock, D.; Samanta, P.; Vesely, W.

    1993-01-01

    This paper describes component-level reliability models that use information on degradation to predict component reliability, and which have been used to evaluate different maintenance and testing policies. The models are based on continuous-time Markov processes, and are a generalization of reliability models currently used in Probabilistic Risk Assessment. The models, the model parameters, and an example of how these models can be used to evaluate maintenance policies are discussed
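    A minimal sketch of the kind of continuous-time Markov degradation model referred to above is given below; the three states, transition rates and repair rate are hypothetical values chosen only to illustrate how such a model can be used to evaluate a maintenance policy.

      # Hypothetical three-state continuous-time Markov degradation model:
      # good (0) -> degraded (1) -> failed (2), with repair 1 -> 0 set by the
      # maintenance policy. Rates are illustrative, not from the report.
      import numpy as np
      from scipy.linalg import expm

      lam_gd = 1.0e-3   # good -> degraded rate [1/h] (assumed)
      lam_df = 5.0e-3   # degraded -> failed rate [1/h] (assumed)
      mu_rep = 2.0e-2   # degraded -> good repair rate [1/h] (maintenance policy)

      # infinitesimal generator: each row sums to zero; "failed" is absorbing
      Q = np.array([[-lam_gd,             lam_gd,   0.0   ],
                    [ mu_rep, -(mu_rep + lam_df),   lam_df],
                    [ 0.0,                0.0,      0.0   ]])

      p0 = np.array([1.0, 0.0, 0.0])          # start in the good state
      for t in (1.0e3, 1.0e4, 5.0e4):         # hours
          p_t = p0 @ expm(Q * t)              # state probabilities at time t
          print(f"t = {t:8.0f} h   P(failed) = {p_t[2]:.3f}")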

  20. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high-order systems. A numerical example for the MINIMAST system is presented.
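    The bookkeeping behind Component Cost Analysis can be sketched as follows for a stable system driven by unit-intensity white noise, using state coordinates as the "components"; the matrices are random placeholders rather than the MINIMAST model, and the reduction step simply keeps the states with the largest costs.

      # Component Cost Analysis sketch for a stable system driven by
      # unit-intensity white noise; random placeholder matrices, not MINIMAST.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      rng = np.random.default_rng(0)
      n, m, p = 6, 2, 2
      A = -np.diag(rng.uniform(1.0, 3.0, n)) + 0.1 * rng.standard_normal((n, n))
      D = rng.standard_normal((n, m))          # white-noise input matrix
      C = rng.standard_normal((p, n))          # output (cost) matrix

      # steady-state state covariance X from A X + X A' + D D' = 0
      X = solve_continuous_lyapunov(A, -D @ D.T)

      # component cost of state i: V_i = [X C'C]_ii; the V_i sum to tr(C X C')
      component_costs = np.diag(X @ C.T @ C)
      total_cost = np.trace(C @ X @ C.T)
      print("component costs:", np.round(component_costs, 4))
      print("total cost     :", round(total_cost, 4), "=", round(component_costs.sum(), 4))

      # reduced-order model: keep the states with the largest component costs
      keep = np.argsort(component_costs)[::-1][:4]
      A_r, D_r, C_r = A[np.ix_(keep, keep)], D[keep, :], C[:, keep]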

  1. Detection of laurel wilt disease in avocado using low altitude aerial imaging.

    Science.gov (United States)

    de Castro, Ana I; Ehsani, Reza; Ploetz, Randy C; Crane, Jonathan H; Buchanon, Sherrie

    2015-01-01

    Laurel wilt is a lethal disease of plants in the Lauraceae plant family, including avocado (Persea americana). This devastating disease has spread rapidly along the southeastern seaboard of the United States and has begun to affect commercial avocado production in Florida. The main objective of this study was to evaluate the potential to discriminate laurel wilt-affected avocado trees using aerial images taken with a modified camera during helicopter surveys at low altitude in the commercial avocado production area. The ability to distinguish laurel wilt-affected trees from other factors that produce similar external symptoms was also studied. RmodGB digital values of healthy trees and laurel wilt-affected trees, as well as fruit stress and vines covering trees, were used to calculate several vegetation indices (VIs), band ratios, and VI combinations. These indices were subjected to analysis of variance (ANOVA) and an M-statistic analysis was performed in order to quantify the separability of those classes. Significant differences in spectral values between laurel wilt-affected and healthy trees were observed in all vegetation indices calculated, although the best results were achieved with Excess Red (ExR), (Red-Green) and Combination 1 (COMB1) in all locations. B/G showed very good potential for separating other factors with symptoms similar to laurel wilt-affected trees, such as fruit stress and vines covering trees, from laurel wilt-affected trees. These consistent results prove the usefulness of using a modified camera (RmodGB) to discriminate laurel wilt-affected avocado trees from healthy trees, as well as from other factors that cause the same symptoms, and suggest performing the classification in further research. According to our results, ExR and B/G should be utilized to develop an algorithm or decision rules to classify aerial images, since they showed the highest capacity to discriminate laurel wilt-affected trees. This methodology may allow the rapid detection
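    For illustration, the snippet below computes two of the indices named above (ExR and B/G) and the M-statistic separability measure from RGB digital values; the ExR formula used (1.4*r - g on chromaticity coordinates) is one common definition and the pixel populations are synthetic placeholders, so this is not the study's processing chain.

      # Illustrative index computation and M-statistic separability check;
      # the ExR definition (1.4*r - g on chromaticity coordinates) is assumed,
      # and the two pixel populations are synthetic placeholders.
      import numpy as np

      def indices(rgb):
          """rgb: (..., 3) array of digital values."""
          R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          total = R + G + B + 1e-9
          r, g = R / total, G / total
          return {"ExR": 1.4 * r - g, "R-G": R - G, "B/G": B / (G + 1e-9)}

      def m_statistic(x, y):
          """Histogram separability: |mean1 - mean2| / (std1 + std2)."""
          return abs(x.mean() - y.mean()) / (x.std() + y.std() + 1e-9)

      rng = np.random.default_rng(1)
      healthy = rng.normal([60, 110, 70], 8, size=(500, 3))    # placeholder pixels
      affected = rng.normal([95, 90, 75], 8, size=(500, 3))    # placeholder pixels

      for name in ("ExR", "R-G", "B/G"):
          m = m_statistic(indices(healthy)[name], indices(affected)[name])
          print(f"{name:4s} M = {m:.2f} ({'good' if m > 1 else 'poor'} separability)")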

  2. Detection of laurel wilt disease in avocado using low altitude aerial imaging.

    Directory of Open Access Journals (Sweden)

    Ana I de Castro

    Full Text Available Laurel wilt is a lethal disease of plants in the Lauraceae plant family, including avocado (Persea americana). This devastating disease has spread rapidly along the southeastern seaboard of the United States and has begun to affect commercial avocado production in Florida. The main objective of this study was to evaluate the potential to discriminate laurel wilt-affected avocado trees using aerial images taken with a modified camera during helicopter surveys at low altitude in the commercial avocado production area. The ability to distinguish laurel wilt-affected trees from other factors that produce similar external symptoms was also studied. RmodGB digital values of healthy trees and laurel wilt-affected trees, as well as fruit stress and vines covering trees, were used to calculate several vegetation indices (VIs), band ratios, and VI combinations. These indices were subjected to analysis of variance (ANOVA) and an M-statistic analysis was performed in order to quantify the separability of those classes. Significant differences in spectral values between laurel wilt-affected and healthy trees were observed in all vegetation indices calculated, although the best results were achieved with Excess Red (ExR), (Red-Green) and Combination 1 (COMB1) in all locations. B/G showed very good potential for separating other factors with symptoms similar to laurel wilt-affected trees, such as fruit stress and vines covering trees, from laurel wilt-affected trees. These consistent results prove the usefulness of using a modified camera (RmodGB) to discriminate laurel wilt-affected avocado trees from healthy trees, as well as from other factors that cause the same symptoms, and suggest performing the classification in further research. According to our results, ExR and B/G should be utilized to develop an algorithm or decision rules to classify aerial images, since they showed the highest capacity to discriminate laurel wilt-affected trees. This methodology may allow the

  3. Ecology and silvicultural management for the rehabilitation in rain forests of low altitude on complex metamorphic

    Directory of Open Access Journals (Sweden)

    Gonzalo Cantos Cevallos

    2018-01-01

    Full Text Available In order to characterize the ecology and silvicultural management for the rehabilitation of the low altitude rain forest on a metamorphic complex, Quibiján-Naranjal del Toa sector, a floristic inventory was carried out in 36 sample plots of 20 x 25 m on both sides of the Toa River. Tree species with d1.3 ≥ 5 cm were measured; a total of 1507 individuals representing 52 species belonging to 49 genera and 24 families were identified and evaluated. Both forests were statistically compared in terms of richness, composition, structure, diversity and abundance, with high alpha and beta diversity. The species with the highest ecological importance value index were determined. The families Fabaceae, Moraceae, Lauraceae and Meliaceae are the most representative in terms of species and genera. The most important species are Hibiscus elatus, Calophyllum utile, Carapa guianensis, Buchenavia capitata, and Guarea guara, among others, which stand out as the most abundant. Economic occupation was adequate in a few plots and incomplete in most of the sampling units. Taking into account the results obtained, silvicultural actions are proposed aimed at sustainable forest management through the application of improvement cuttings and the method of enrichment in dense spaced groups for the rehabilitation and the achievement of the expected uneven-aged forest.

  4. Local-time survey of plasma at low altitudes over the auroral zones.

    Science.gov (United States)

    Frank, L. A.; Ackerson, K. L.

    1972-01-01

    Local-time survey of the low-energy proton and electron intensities precipitated into the earth's atmosphere over the auroral zones during periods of magnetic quiescence. This survey was constructed by selecting a typical individual satellite crossing of this region in each of eight local-time sectors from a large library of similar observations with the polar-orbiting satellite Injun 5. The trapping boundary for more-energetic electron intensities, E greater than 45 keV, was found to be a 'natural coordinate' for delineating the boundary between the two major types of lower-energy, 50 less than or equal to E less than or equal to 15,000 eV, electron precipitation commonly observed over the auroral zones at low altitudes. Poleward of this trapping boundary inverted 'V' electron precipitation bands are observed in all local-time sectors. These inverted 'V' electron bands in the evening and midnight sectors are typically more energetic and have greater latitudinal widths than their counterparts in the noon and morning sectors. In general, the main contributors to the electron energy influx into the earth's atmosphere over the auroral zones are the electron inverted 'V' precipitation poleward of the trapping boundary in late evening, the plasma-sheet electron intensities equatorward of this boundary in early morning, and both of these precipitation events near local midnight.

  5. The Gravity Field of Mercury After the Messenger Low-Altitude Campaign

    Science.gov (United States)

    Mazarico, Erwan; Genova, Antonio; Goossens, Sander; Lemoine, Frank G.; Smith, David E.; Zuber, Maria T.; Neumann, Gary A.; Solomon, Sean C.

    2015-01-01

    The final year of the MESSENGER mission was designed to take advantage of the remaining propellant onboard to provide a series of low-altitude observation campaigns and acquire novel scientific data about the innermost planet. The lower periapsis altitude greatly enhances the sensitivity to the short-wavelength gravity field, but only when the spacecraft is in view of Earth. After more than 3 years in orbit around Mercury, the MESSENGER spacecraft was tracked for the first time below 200-km altitude on 5 May 2014 by the NASA Deep Space Network (DSN). Between August and October, periapsis passages down to 25-km altitude were routinely tracked. These periods considerably improved the quality of the data coverage. Before the end of its mission, MESSENGER will fly at very low altitudes for extended periods of time. Given the orbital geometry, however, the periapses will not be visible from Earth and so no new tracking data will be available for altitudes lower than 75 km. Nevertheless, the continuous tracking of MESSENGER in the northern hemisphere will help improve the uniformity of the spatial coverage at altitudes lower than 150 km, which will further improve the overall quality of the Mercury gravity field.

  6. ELF and VLF signatures of sprites registered onboard the low altitude satellite DEMETER

    Directory of Open Access Journals (Sweden)

    J. Błęcki

    2009-06-01

    Full Text Available We report the observation of ELF and VLF signatures of sprites recorded on the low altitude satellite DEMETER during thunderstorm activity. At an altitude of ~700 km, waves observed on the E-field spectrograms at mid-to-low latitudes during night time are mainly dominated by up-going 0+ whistlers. During the night of 20 July 2007 two sprites were observed around 20:10:08 UT from the observatory located on the top of the mountain Śnieżka in Poland (50°44'09" N, 15°44'21" E, 1603 m), and ELF and VLF data were recorded by the satellite at about 1200 km from the region of thunderstorm activity. During this event, the DEMETER instruments were switched to burst mode and it was possible to register the waveforms. It is shown that the two sprites were triggered by two intense +CG lightning strokes (100 kA) occurring during the same millisecond but not at the same location. Despite the distance, DEMETER recorded at the same time intense and unusual ELF and VLF emissions. It is shown that the whistler wave propagates from the thunderstorm regions in the Earth-ionosphere waveguide and enters the ionosphere below the satellite. The emissions last several tens of milliseconds and the intensity of the ELF waveform is close to 1 mV/m. A particularly intense proton whistler is also associated with these emissions.

  7. Low altitude unmanned aerial vehicle for characterising remediation effectiveness following the FDNPP accident

    International Nuclear Information System (INIS)

    Martin, P.G.; Payton, O.D.; Fardoulis, J.S.; Richards, D.A.; Yamashiki, Y.; Scott, T.B.

    2016-01-01

    On the 11th of March 2011, the Great Tōhoku Earthquake occurred 70 km off the eastern coast of Japan, generating a large 14 m high tsunami. The ensuing catalogue of events over the succeeding 12 d resulted in the release of considerable quantities of radioactive material into the environment. Important to the large-scale remediation of the affected areas is the accurate and high spatial resolution characterisation of contamination, including the verification of decontaminated areas. To enable this, a low altitude unmanned aerial vehicle equipped with a lightweight gamma-spectrometer and height normalisation system was used to produce sub-meter resolution maps of contamination. This system provided a valuable method to examine both contaminated and remediated areas rapidly, whilst greatly reducing the dose received by the operator, typically in localities formerly inaccessible to ground-based survey methods. The characterisation of three sites within Fukushima Prefecture is presented; one remediated (and a site of much previous attention), one un-remediated and a third having been subjected to an alternative method to reduce emitted radiation dose. - Highlights: • Contamination near FDNPP was mapped with a UAV. • Effectiveness of remediation is observed. • Sub-meter resolution mapping is achieved. • Isotopic nature of radiation is determined.

  8. The Advantage by Using Low-Altitude UAV for Sustainable Urban Development Control

    Science.gov (United States)

    Djimantoro, Michael I.; Suhardjanto, Gatot

    2017-12-01

    A city will always grow and develop along with its increasing population, which creates more demand for building space in the city. That development can be carried out by the government, the private sector or by individuals, but it needs to follow the ordinances set out in the city plan in order to avoid adverse impacts in the future. The problem arises when the city to be monitored is one like Jakarta, Indonesia, which has an area of 661 square kilometres, while the number of government employees available for monitoring is limited. Therefore, given the large development area and the limited resources, it is important to develop new tools to monitor the development of the city. This research explores the use of a low-altitude UAV (Unmanned Aerial Vehicle) combined with photogrammetry techniques, a new and rapidly developing technology, to collect as-built building development information in a real-time, cost-effective and efficient manner. The results of this research explore the possibility of using the UAV for sustainable urban development control and show that it can detect anomalies in the development.

  9. Modelization of cooling system components

    Energy Technology Data Exchange (ETDEWEB)

    Copete, Monica; Ortega, Silvia; Vaquero, Jose Carlos; Cervantes, Eva [Westinghouse Electric (Spain)

    2010-07-01

    In the site evaluation study for licensing a new nuclear power facility, the criteria involved could be grouped into health and safety, environment, socio-economics, engineering and cost-related criteria. These encompass different aspects such as geology, seismology, cooling system requirements, weather conditions, flooding, population, and so on. The selection of the cooling system is a function of different parameters such as the gross electrical output, energy consumption, available area for cooling system components, environmental conditions, water consumption, and others. Moreover, in recent years, extreme environmental conditions have been experienced and stringent water availability limits have affected water use permits. Therefore, modifications or alternatives of current cooling system designs and operation are required, as well as analyses of the different possibilities of cooling systems to optimize energy production taking into account water consumption among other important variables. There are two basic cooling system configurations: - Once-through or Open-cycle; - Recirculating or Closed-cycle. In a once-through cooling system (or open-cycle), water from an external water source passes through the steam cycle condenser and is then returned to the source at a higher temperature with some level of contaminants. To minimize the thermal impact on the water source, a cooling tower may be added in a once-through system to allow air cooling of the water (with associated losses on site due to evaporation) prior to returning the water to its source. This system has a high thermal efficiency, and its operating and capital costs are very low. So, from an economic point of view, the open-cycle is preferred to the closed-cycle system, especially if there are no water limitations or environmental restrictions. In a recirculating system (or closed-cycle), cooling water exits the condenser, goes through a fixed heat sink, and is then returned to the condenser. This configuration

  10. DETERMINING SPECTRAL REFLECTANCE COEFFICIENTS FROM HYPERSPECTRAL IMAGES OBTAINED FROM LOW ALTITUDES

    Directory of Open Access Journals (Sweden)

    P. Walczykowski

    2016-06-01

    Full Text Available Remote Sensing plays a very important role in many different study fields, like hydrology, crop management, environmental and ecosystem studies. For all mentioned areas of interest different remote sensing and image processing techniques, such as image classification (object- and pixel-based), object identification, change detection, etc., can be applied. Most of these techniques use spectral reflectance coefficients as the basis for the identification and distinction of different objects and materials, e.g. monitoring of vegetation stress, identification of water pollutants, yield identification, etc. Spectral characteristics are usually acquired using discrete methods such as spectrometric measurements in both laboratory and field conditions. Such measurements, however, can be very time-consuming, which has led many international researchers to investigate the reliability and accuracy of using image-based methods. According to published and ongoing studies, in order to acquire these spectral characteristics from images, it is necessary to have hyperspectral data. The presented article describes a series of experiments conducted using the push-broom Headwall MicroHyperspec A-series VNIR. This hyperspectral scanner allows for registration of images with more than 300 spectral channels with a 1.9 nm spectral bandwidth in the 380-1000 nm range. The aim of these experiments was to establish a methodology for acquiring spectral reflectance characteristics of different forms of land cover using such a sensor. All research work was conducted in controlled conditions from low altitudes. Hyperspectral images obtained with this specific type of sensor require a unique approach in terms of post-processing, especially radiometric correction. Large amounts of acquired imagery data allowed the authors to establish a new post-processing approach. The developed methodology allowed the authors to obtain spectral reflectance coefficients from a hyperspectral sensor

  11. Propagation of whistler-mode chorus to low altitudes: divergent ray trajectories and ground accessibility

    Directory of Open Access Journals (Sweden)

    J. Chum

    2005-12-01

    Full Text Available We investigate the ray trajectories of nonductedly propagating lower-band chorus waves with respect to their initial angle θ0 between the wave vector and the ambient magnetic field. Although we consider a wide range of initial angles θ0, in order to be consistent with recent satellite observations we pay special attention to the intervals of initial angles θ0 for which the waves propagate along the field lines in the source region, i.e. we mainly focus on waves generated with θ0 within an interval close to 0° and on waves generated within an interval close to the Gendrin angle. We demonstrate that the ray trajectories of waves generated within an interval close to the Gendrin angle with a wave vector directed towards the lower L-shells (towards the Earth) significantly diverge at the frequencies typical for the lower-band chorus. Some of these diverging trajectories reach the topside ionosphere having θ close to 0°; thus, a part of the energy may leak to the ground at higher altitudes where the field lines have a nearly vertical direction. The waves generated with different initial angles are reflected. A small variation of the initial wave normal angle thus very dramatically changes the behaviour of the resulting ray. Although our approach is rather theoretical, based on the ray tracing simulation, we show that the initial angle θ0 of the waves reaching the ionosphere (and possibly the ground) is surprisingly close (it differs by just several degrees) to the initial angles that fit the observations of magnetospherically reflected chorus revealed by the CLUSTER satellites. We also mention observations of diverging trajectories on low altitude satellites.

  12. Determining Spectral Reflectance Coefficients from Hyperspectral Images Obtained from Low Altitudes

    Science.gov (United States)

    Walczykowski, P.; Jenerowicz, A.; Orych, A.; Siok, K.

    2016-06-01

    Remote Sensing plays a very important role in many different study fields, like hydrology, crop management, environmental and ecosystem studies. For all mentioned areas of interest different remote sensing and image processing techniques, such as image classification (object- and pixel-based), object identification, change detection, etc., can be applied. Most of these techniques use spectral reflectance coefficients as the basis for the identification and distinction of different objects and materials, e.g. monitoring of vegetation stress, identification of water pollutants, yield identification, etc. Spectral characteristics are usually acquired using discrete methods such as spectrometric measurements in both laboratory and field conditions. Such measurements, however, can be very time-consuming, which has led many international researchers to investigate the reliability and accuracy of using image-based methods. According to published and ongoing studies, in order to acquire these spectral characteristics from images, it is necessary to have hyperspectral data. The presented article describes a series of experiments conducted using the push-broom Headwall MicroHyperspec A-series VNIR. This hyperspectral scanner allows for registration of images with more than 300 spectral channels with a 1.9 nm spectral bandwidth in the 380-1000 nm range. The aim of these experiments was to establish a methodology for acquiring spectral reflectance characteristics of different forms of land cover using such a sensor. All research work was conducted in controlled conditions from low altitudes. Hyperspectral images obtained with this specific type of sensor require a unique approach in terms of post-processing, especially radiometric correction. Large amounts of acquired imagery data allowed the authors to establish a new post-processing approach. The developed methodology allowed the authors to obtain spectral reflectance coefficients from a hyperspectral sensor mounted on an
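    The record does not state the exact radiometric correction used; as one hedged example of how image digital numbers are commonly converted to reflectance, the sketch below applies the empirical line method with two calibration panels of known reflectance, all values being placeholders.

      # Empirical line method (assumed here, not stated in the record): fit a
      # per-band linear model reflectance = gain*DN + offset from reference
      # panels of known reflectance. All numbers are placeholders.
      import numpy as np

      dn_panels = np.array([41.0, 203.0])        # mean DN of dark and bright panels
      rho_panels = np.array([0.05, 0.56])        # known panel reflectances

      gain, offset = np.polyfit(dn_panels, rho_panels, 1)

      dn_target = np.array([75.0, 118.0, 160.0]) # DNs of target pixels in this band
      reflectance = gain * dn_target + offset
      print(np.round(reflectance, 3))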

  13. Tweaking the Four-Component Model

    Science.gov (United States)

    Curzer, Howard J.

    2014-01-01

    By maintaining that moral functioning depends upon four components (sensitivity, judgment, motivation, and character), the Neo-Kohlbergian account of moral functioning allows for uneven moral development within individuals. However, I argue that the four-component model does not go far enough. I offer a more accurate account of moral functioning…

  14. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report

  15. Overview of the model component in ECOCLIM

    DEFF Research Database (Denmark)

    Geels, Camilla; Boegh, Eva; Bendtsen, J

    and atmospheric models. We will use the model system to 1) quantify the potential effects of climate change on ecosystem exchange of GHG and 2) estimate the impacts of changes in management practices including land use change and nitrogen (N) loads. Here the various model components will be introduced...

  16. DOA Estimation of Low Altitude Target Based on Adaptive Step Glowworm Swarm Optimization-multiple Signal Classification Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Hao

    2015-06-01

    Full Text Available The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for the Direction Of Arrival (DOA) estimation of targets in a low-altitude multipath environment. As such, a novel MUSIC approach is proposed on the basis of the Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. The virtual spatial smoothing of the matrix formed by each snapshot is used to realize the decorrelation of the multipath signal and the establishment of a full-order correlation matrix. ASGSO optimizes the function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely without loss of effective radar aperture.
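    A sketch of the two conventional ingredients named above, spatial smoothing to decorrelate coherent multipath followed by a MUSIC pseudo-spectrum, is given below; the swarm-based ASGSO search is replaced by a plain grid search and all array and scenario parameters are assumed for illustration.

      # Forward spatial smoothing + MUSIC sketch for coherent multipath; the
      # ASGSO optimizer is replaced by a grid search, parameters are assumed.
      import numpy as np
      from scipy.signal import find_peaks

      M, L, d_lam = 16, 10, 0.5              # sensors, subarray size, spacing in wavelengths
      angles_true = np.deg2rad([4.0, -4.0])  # direct path and its coherent surface image
      N_snap = 200
      rng = np.random.default_rng(2)

      def steer(theta, n_el):
          # plane-wave steering vector for a uniform linear array
          return np.exp(1j * 2 * np.pi * d_lam * np.arange(n_el) * np.sin(theta))

      # fully coherent multipath: the same waveform arrives on both paths
      s = rng.standard_normal(N_snap) + 1j * rng.standard_normal(N_snap)
      A = np.column_stack([steer(t, M) for t in angles_true])
      noise = 0.05 * (rng.standard_normal((M, N_snap)) + 1j * rng.standard_normal((M, N_snap)))
      X = A @ np.vstack([s, -0.9 * s]) + noise

      # forward spatial smoothing over P = M - L + 1 overlapping subarrays
      P = M - L + 1
      R = sum(X[p:p + L] @ X[p:p + L].conj().T for p in range(P)) / (P * N_snap)

      # MUSIC pseudo-spectrum from the noise subspace (K = 2 sources assumed known)
      w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
      En = V[:, :L - 2]
      grid = np.deg2rad(np.linspace(-15, 15, 3001))
      spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(t, L)) ** 2 for t in grid])

      peaks, _ = find_peaks(spec)
      top2 = peaks[np.argsort(spec[peaks])[-2:]]
      print("estimated DOAs (deg):", np.sort(np.round(np.rad2deg(grid[top2]), 2)))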

  17. Mercury's Hollows: New Information on Distribution and Morphology from MESSENGER Observations at Low Altitude

    Science.gov (United States)

    Blewett, D. T.; Stadermann, A. C.; Chabot, N. L.; Denevi, B. W.; Ernst, C. M.; Peplowski, P. N.

    2014-12-01

    MESSENGER's orbital mission at Mercury led to the discovery of an unusual landform not known from other airless rocky bodies of the Solar System. Hollows are irregularly shaped, shallow, rimless depressions, often occurring in clusters and with high-reflectance interiors and halos. The fresh appearance of hollows suggests that they are relatively young features. For example, hollows are uncratered, and talus aprons downslope of hollows in certain cases appear to be covering small impact craters (100-200 m in diameter). Hence, some hollows may be actively forming at present. The characteristics of hollows are suggestive of formation via destruction of a volatile-bearing phase (possibly one or more sulfides) through solar heating, micrometeoroid bombardment, and/or ion impact. Previous analysis showed that hollows are associated with low-reflectance material (LRM), a color unit identified from global color images. The material hosting hollows has often been excavated from depth by basin or crater impacts. Hollows are small features (tens of meters to several kilometers), so their detection and characterization with MESSENGER's global maps have been limited. MESSENGER's low-altitude orbits provide opportunities for collection of images at high spatial resolutions, which reveal new occurrences of hollows and offer views of hollows with unprecedented detail. As of this writing, we have examined more than 21,000 images. Shadow-length measurements were made on 280 images, yielding the depths of 1343 individual hollows. The mean depth is 30 m, with a standard deviation of 17 m. We also explored correlations between the geographic locations of hollows and maps provided by the MESSENGER geochemical sensors (X-Ray, Gamma-Ray, and Neutron Spectrometers), including the abundances of Al/Si, Ca/Si, Fe/Si, K, Mg/Si, and S/Si, as well as total neutron cross-section. No clear compositional trends emerged; it is likely that any true compositional preference for terrain

  18. Impacts of more frequent droughts on a relict low-altitude Pinus uncinata stand in the French Alps

    Directory of Open Access Journals (Sweden)

    Christophe eCorona

    2015-01-01

    Full Text Available Cold microclimatic conditions provide exceptional microhabitats to Pinus uncinata stands occurring at abnormally low altitudes in seven paleorefugia of the northern French Alps. Here, P. uncinata is located at the lower bounds of its ecological limits and is therefore expected to provide a sensitive indicator of climate change processes. We used dendrochronological analysis to study the growth patterns of closely spaced chronologies across an elevational transect and to compare a relict low-altitude stand with a P. uncinata stand located at the alpine treeline. Two detrending procedures are used to reveal high- and low-frequency wavelengths embedded in annually resolved ring-width series. Growth response of P. uncinata to instrumental temperature and precipitation data is investigated by means of moving response function analyses. Results show an increase in the sensitivity of tree-ring widths to drought during the previous summer in both stands. At the treeline stand, an increasing correlation with fall temperature is observed whereby low-frequency variability of fall temperature and radial tree growth increased in two synchronous steps around ~1930 and from ~1980–present. At the low-altitude stand, P. uncinata appears more drought-sensitive and exhibits a sharp growth decline since the mid-1980s, coinciding with increasing summer temperatures. Growth divergence between the two stands can be observed since the mid-1980s. We argue that the positive growth trend at the high-altitude stand is due to increasing fall temperatures which would favor the formation of metabolic reserves in conjunction with atmospheric CO2 enrichment that in turn would facilitate improved water use efficiency. At the relict low-altitude stand, in contrast, it seems that improved water use efficiency cannot compensate for the increase in summer temperatures.

  19. Unmanned Aircraft System (UAS) Traffic Management (UTM): Enabling Civilian Low-Altitude Airspace and Unmanned Aerial System Operations

    Science.gov (United States)

    Kopardekar, Parimal Hemchandra

    2016-01-01

    Just a year ago we laid out the UTM challenges and NASA's proposed solutions. During the past year NASA's goal continues to be to conduct research, development and testing to identify airspace operations requirements to enable large-scale visual and beyond visual line-of-sight UAS operations in the low-altitude airspace. Significant progress has been made, and NASA is continuing to move forward.

  20. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing.

    Science.gov (United States)

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-09-30

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently, consumer-grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibility of using the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratios of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently, data of both high spatial and spectral resolution are not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead, a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR-sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best-performing models (R²-values up to 0.77) corresponded with the OBRA
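    The core OBRA idea referred to above can be sketched in a few lines: for every band pair, regress depth against the log band ratio and keep the pair with the highest R²; the spectra and depths below are synthetic placeholders rather than the study's field data.

      # Minimal OBRA sketch: pick the band pair whose log-ratio best predicts
      # depth. Reflectances and depths here are synthetic placeholder data.
      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(3)
      n_samples, n_bands = 120, 20
      wavelengths = np.linspace(400, 950, n_bands)              # nm
      depth = rng.uniform(0.1, 1.2, n_samples)                  # m

      # toy reflectance: NIR bands attenuate strongly with depth, visible bands weakly
      atten = 0.2 + 3.0 * (wavelengths > 800)                   # attenuation [1/m]
      refl = 0.4 * np.exp(-np.outer(depth, atten)) + rng.normal(0, 0.005, (n_samples, n_bands))
      refl = np.clip(refl, 1e-4, None)

      def obra(reflectance, depths):
          best = (-np.inf, None)
          for i, j in combinations(range(reflectance.shape[1]), 2):
              x = np.log(reflectance[:, i] / reflectance[:, j])
              r2 = np.corrcoef(x, depths)[0, 1] ** 2            # R^2 of a linear fit
              if r2 > best[0]:
                  best = (r2, (i, j))
          return best

      r2, (i, j) = obra(refl, depth)
      print(f"best pair: {wavelengths[i]:.0f} nm / {wavelengths[j]:.0f} nm, R^2 = {r2:.2f}")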

  1. Breeding for Increased Water Use Efficiency in Corn (Maize) Using a Low-altitude Unmanned Aircraft System

    Science.gov (United States)

    Shi, Y.; Veeranampalayam-Sivakumar, A. N.; Li, J.; Ge, Y.; Schnable, J. C.; Rodriguez, O.; Liang, Z.; Miao, C.

    2017-12-01

    Low-altitude aerial imagery collected by unmanned aircraft systems (UAS) at centimeter-level spatial resolution provides great potential to collect high throughput plant phenotyping (HTP) data and accelerate plant breeding. This study is focused on UAS-based HTP for breeding increased water use efficiency in corn in eastern Nebraska. The field trial is part of the Genomes to Fields consortium effort to grow and phenotype many of the same corn (maize) hybrids at approximately 40 locations across the United States and Canada in order to stimulate new research in crop modeling, the development of new plant phenotyping technologies and the identification of genetic loci that control the adaptation of specific corn (maize) lines to specific environments. It included approximately 250 maize hybrids primarily generated using recently off-patent material from major seed companies. These lines are the closest material to what farmers are growing today which can be legally used for research purposes and genotyped by the public sector. During the growing season, a hexacopter equipped with a multispectral and an RGB camera was flown and used to image this 1-hectare field trial near Mead, NE. Sensor data from the UAS were correlated directly with grain yield, measured at the end of the growing season, and were also used to quantify other traits of interest to breeders including flowering date, plant height, leaf orientation, canopy spectra, and stand count. The existing challenges of field data acquisition (to ensure data quality) and development of effective image processing algorithms (such as detecting corn tassels) will be discussed. The success of this study and others like it will speed up the process of phenotypic data collection, and provide more accurate and detailed trait data for plant biologists, plant breeders, and other agricultural scientists. Employing advanced UAS-based machine vision technologies in agricultural applications has the potential
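
    As a hedged illustration of how UAS sensor data can be related to yield, the sketch below computes a per-plot NDVI and its correlation with grain yield; band values, plot counts and yields are invented, not data from the Nebraska trial.

```python
# Correlate a per-plot vegetation index from multispectral imagery with grain yield.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_plots = 250
red = rng.uniform(0.05, 0.15, n_plots)    # mean red reflectance per plot
nir = rng.uniform(0.30, 0.60, n_plots)    # mean near-infrared reflectance per plot
ndvi = (nir - red) / (nir + red)

# Synthetic yields loosely tied to NDVI, purely for demonstration
yield_t_ha = 8.0 + 6.0 * (ndvi - ndvi.mean()) + rng.normal(0, 0.5, n_plots)

r, p = pearsonr(ndvi, yield_t_ha)
print(f"NDVI vs yield: r = {r:.2f}, p = {p:.3g}")
```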

  2. Probabilistic Modeling of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei

    Wind energy is one of several energy sources in the world and a rapidly growing industry in the energy sector. When placed in offshore or onshore locations, wind turbines are exposed to wave excitations, highly dynamic wind loads and/or the wakes from other wind turbines. Therefore, most components...... in a wind turbine experience highly dynamic and time-varying loads. These components may fail due to wear or fatigue, and this can lead to unplanned shutdown repairs that are very costly. The design by deterministic methods using safety factors is generally unable to account for the many uncertainties. Thus......, a reliability assessment should be based on probabilistic methods where stochastic modeling of failures is performed. This thesis focuses on probabilistic models and the stochastic modeling of the fatigue life of the wind turbine drivetrain. Hence, two approaches are considered for stochastic modeling...

  3. Modeling accelerator structures and RF components

    International Nuclear Information System (INIS)

    Ko, K., Ng, C.K.; Herrmannsfeldt, W.B.

    1993-03-01

    Computer modeling has become an integral part of the design and analysis of accelerator structures and RF components. Sophisticated 3D codes, powerful workstations and timely theory support all contributed to this development. We will describe our modeling experience with these resources and discuss their impact on ongoing work at SLAC. Specific examples from R&D on a future linear collider and a proposed e+e- storage ring will be included

  4. A principal components model of soundscape perception.

    Science.gov (United States)

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
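
    A rough sketch of the analysis pattern described above (not the study's data): PCA on an excerpts-by-attribute-scales ratings matrix, reporting the variance explained by the first three components.

```python
# PCA on averaged attribute-scale ratings of soundscape excerpts
# (50 excerpts x 116 attribute scales; the ratings here are random).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
ratings = rng.normal(size=(50, 116))      # rows: excerpts, columns: attribute scales

pca = PCA(n_components=3)
scores = pca.fit_transform(ratings)       # component scores per excerpt
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
```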

  5. Increased Hypoxic Dose After Training at Low Altitude with 9h Per Night at 3000m Normobaric Hypoxia.

    Science.gov (United States)

    Carr, Amelia J; Saunders, Philo U; Vallance, Brent S; Garvican-Lewis, Laura A; Gore, Christopher J

    2015-12-01

    This study examined effects of low altitude training and a live-high: train-low protocol (combining both natural and simulated modalities) on haemoglobin mass (Hbmass), maximum oxygen consumption (VO2max), time to exhaustion, and submaximal exercise measures. Eighteen elite-level race-walkers were assigned to one of two experimental groups; lowHH (low Hypobaric Hypoxia: continuous exposure to 1380 m for 21 consecutive days; n = 10) or a combined low altitude training and nightly Normobaric Hypoxia (lowHH+NHnight: living and training at 1380 m, plus 9 h.night(-1) at a simulated altitude of 3000 m using hypoxic tents; n = 8). A control group (CON; n = 10) lived and trained at 600 m. Measurement of Hbmass, time to exhaustion and VO2max was performed before and after the training intervention. Paired samples t-tests were used to assess absolute and percentage change pre and post-test differences within groups, and differences between groups were assessed using a one-way ANOVA with least significant difference post-hoc testing. Statistical significance was tested at p < 0.05. We recommend low altitude (1380 m) combined with sleeping in altitude tents (3000 m) as one effective alternative to traditional altitude training methods, which can improve Hbmass. Key points: In some countries, it may not be possible to perform classical altitude training effectively, due to the low elevation at altitude training venues. An additional hypoxic stimulus can be provided by simulating higher altitudes overnight, using altitude tents. Three weeks of combined (living and training at 1380 m) and simulated altitude exposure (at 3000 m) can improve haemoglobin mass by over 3% in comparison to control values, and can also improve time to exhaustion by ~9% in comparison to baseline. We recommend that, in the context of an altitude training camp at low altitudes (~1400 m) the addition of a relatively short exposure to simulated altitudes of 3000 m can elicit physiological and performance benefits, without compromise to

  6. Thermochemical modelling of multi-component systems

    International Nuclear Information System (INIS)

    Sundman, B.; Gueneau, C.

    2015-01-01

    Computational thermodynamics, also known as the Calphad method, is a standard tool in industry for the development of materials and the improvement of processes, and there is intense scientific development of new models and databases. The calculations are based on thermodynamic models of the Gibbs energy for each phase as a function of temperature, pressure and constitution. Model parameters are stored in databases that are developed in an international scientific collaboration. In this way, consistent and reliable data for many properties like heat capacity, chemical potentials, solubilities etc. can be obtained for multi-component systems. A brief introduction to this technique is given here and references to more extensive documentation are provided. (authors)

  7. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling......Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind sources separation methods for modeling and understanding of multimedia data, which...

  8. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra for the four classifications used to classify objects are made available with the code.
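
    The original code is written in IDL; the following sketch only illustrates the same general workflow on invented data: project spectra onto five principal components and fit a Gaussian mixture in that 5-D space to assign classes.

```python
# PCA followed by Gaussian mixture classification in the component space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
spectra = rng.normal(size=(300, 180))          # 300 synthetic spectra, 180 spectral channels

pcs = PCA(n_components=5).fit_transform(spectra)            # 5-D principal component scores
gmm = GaussianMixture(n_components=4, random_state=0).fit(pcs)
classes = gmm.predict(pcs)                                   # one of four classes per spectrum
print(np.bincount(classes))
```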

  9. Pool scrubbing models for iodine components

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, K [Battelle Ingenieurtechnik GmbH, Eschborn (Germany)

    1996-12-01

    Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes has been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs.

  10. Pool scrubbing models for iodine components

    International Nuclear Information System (INIS)

    Fischer, K.

    1996-01-01

    Pool scrubbing is an important mechanism to retain radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes has been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs

  11. Computational needs for modelling accelerator components

    International Nuclear Information System (INIS)

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs

  12. Increased Hypoxic Dose After Training at Low Altitude with 9h Per Night at 3000m Normobaric Hypoxia

    Directory of Open Access Journals (Sweden)

    Amelia J. Carr, Philo U. Saunders, Brent S. Vallance, Laura A. Garvican-Lewis, Christopher J. Gore

    2015-12-01

    This study examined effects of low altitude training and a live-high: train-low protocol (combining both natural and simulated modalities) on haemoglobin mass (Hbmass), maximum oxygen consumption (VO2max), time to exhaustion, and submaximal exercise measures. Eighteen elite-level race-walkers were assigned to one of two experimental groups; lowHH (low Hypobaric Hypoxia: continuous exposure to 1380 m for 21 consecutive days; n = 10) or a combined low altitude training and nightly Normobaric Hypoxia (lowHH+NHnight: living and training at 1380 m, plus 9 h.night-1 at a simulated altitude of 3000 m using hypoxic tents; n = 8). A control group (CON; n = 10) lived and trained at 600 m. Measurement of Hbmass, time to exhaustion and VO2max was performed before and after the training intervention. Paired samples t-tests were used to assess absolute and percentage change pre and post-test differences within groups, and differences between groups were assessed using a one-way ANOVA with least significant difference post-hoc testing. Statistical significance was tested at p < 0.05. There was a 3.7% increase in Hbmass in lowHH+NHnight compared with CON (p = 0.02). In comparison to baseline, Hbmass increased by 1.2% (±1.4%) in the lowHH group, 2.6% (±1.8%) in lowHH+NHnight, and there was a decrease of 0.9% (±4.9%) in CON. VO2max increased by ~4% within both experimental conditions but was not significantly greater than the 1% increase in CON. There was a ~9% difference in pre and post-intervention values in time to exhaustion after lowHH+NHnight (p = 0.03) and a ~8% pre to post-intervention difference (p = 0.006) after lowHH only. We recommend low altitude (1380 m) combined with sleeping in altitude tents (3000 m) as one effective alternative to traditional altitude training methods, which can improve Hbmass.
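
    A minimal sketch of the statistical treatment described above, using synthetic values rather than the study data: paired t-tests on pre/post measurements within each group and a one-way ANOVA on the percentage changes between groups.

```python
# Within-group paired t-tests and a between-group one-way ANOVA on percentage change.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
pre = {g: rng.normal(900, 60, n) for g, n in [("lowHH", 10), ("lowHH+NHnight", 8), ("CON", 10)]}
post = {g: v * (1 + rng.normal(0.02, 0.02, v.size)) for g, v in pre.items()}  # ~2% mean change

pct_change = {}
for g in pre:
    t, p = stats.ttest_rel(post[g], pre[g])            # paired t-test within group
    pct_change[g] = 100 * (post[g] - pre[g]) / pre[g]
    print(f"{g}: t = {t:.2f}, p = {p:.3f}")

f, p = stats.f_oneway(*pct_change.values())            # between-group comparison
print(f"one-way ANOVA on %change: F = {f:.2f}, p = {p:.3f}")
```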

  13. Food composition of some low altitude Lissotriton montandoni (Amphibia, Caudata) populations from North-Western Romania

    Directory of Open Access Journals (Sweden)

    Covaciu-Marcov S.D.

    2010-01-01

    The diet of some populations of Lissotriton montandoni from north-western Romania is composed of prey belonging to 20 categories. The food components of the Carpathian newts are similar to those of other species of newts. Most of the prey are aquatic animals, but terrestrial prey also has a high percentage abundance. The consumed prey categories are common in the newts' habitats as well, but in natural ponds the prey item with the highest abundance in the diet is not the most frequent one in the habitat. Thus, although the Carpathian newts are basically opportunistic predators, they still display a certain trophic selectivity.

  14. The usefulness of low-altitude aerial photography for the assessment of channel morphodynamics of a lowland river

    Directory of Open Access Journals (Sweden)

    Ostrowski Piotr

    2017-06-01

    The paper presents examples of using low-altitude aerial images of a modern river channel, acquired from an ultralight aircraft. The images have been taken for two sections of the Vistula river: in the Małopolska Gorge and near Dęblin and Gołąb. Alongside the research flights, terrestrial investigations, such as echo sounding of the riverbed and geological mapping, were carried out in the river channel zone. A comparison of the results of aerial and terrestrial research revealed high clarity of the images, allowing for precise identification of the evidence that indicates the specific course of river channel processes. Aerial images taken from ultralight aircraft can significantly increase the accuracy of geological surveys of river channel zones in the Polish Lowlands due to low logistic requirements.

  15. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  16. On the variability of I(7620 Å)/I(5577 Å) in low altitude aurora

    Directory of Open Access Journals (Sweden)

    E. J. Llewellyn

    1999-07-01

    An auroral electron excitation model, combined with simple equilibrium neutral and ion chemistry models, is used to investigate the optical emission processes and height profiles of I(5577 Å) and I(7620 Å) in the 90 to 100 km altitude region. It is shown that the apparent discrepancies between ground-based and rocket-borne auroral observations of the I(7620 Å)/I(5577 Å) ratio are due to the extreme height variation of this intensity ratio in the 90 to 100 km region. Key words: Atmospheric composition and structure (airglow and aurora)

  17. On the variability of I(7620 Å)/I(5577 Å) in low altitude aurora

    Directory of Open Access Journals (Sweden)

    E. J. Llewellyn

    An auroral electron excitation model, combined with simple equilibrium neutral and ion chemistry models, is used to investigate the optical emission processes and height profiles of I(5577 Å) and I(7620 Å) in the 90 to 100 km altitude region. It is shown that the apparent discrepancies between ground-based and rocket-borne auroral observations of the I(7620 Å)/I(5577 Å) ratio are due to the extreme height variation of this intensity ratio in the 90 to 100 km region.

    Key words: Atmospheric composition and structure (airglow and aurora)

  18. Energy Expenditures of Caribou Responding to Low-Altitude Jet Aircraft.

    Science.gov (United States)

    1993-09-01

    collared female caribou of the Delta Herd, 5 controls and 5 treatments (i.e., overflown), carried animal noise monitors and were overflown in April... Porcupine Caribou Model (CARIBOU), predicted that, for the sound exposures of the field study, changes in energy expenditure, forage intake, energy balance and consequent pregnancy rate were small. Although we project no significant decrease in fecundity and thus, herd productivity in response to the

  19. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands for more functionality, at even lower prices, and with opposite constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S-functions.

  20. Radioisotope Stirling Engine Powered Airship for Low Altitude Operation on Venus

    Science.gov (United States)

    Colozza, Anthony J.

    2012-01-01

    The feasibility of a Stirling engine powered airship for the near surface exploration of Venus was evaluated. The heat source for the Stirling engine was limited to 10 general purpose heat source (GPHS) blocks. The baseline airship utilized hydrogen as the lifting gas, and the electronics and payload were enclosed in a cooled, insulated pressure vessel to maintain the internal temperature at 320 K and 1 bar pressure. The propulsion system consisted of an electric motor driving a propeller. An analysis was set up to size the airship that could operate near the Venus surface based on the available thermal power. The atmospheric conditions on Venus were modeled and used in the analysis. The analysis was an iterative process between sizing the airship to carry a specified payload and the power required to operate the electronics, payload and cooling system as well as provide power to the propulsion system to overcome the drag on the airship. A baseline configuration was determined that could meet the power requirements and operate near the Venus surface. From this baseline design, additional trades were made to see how other factors, such as the internal temperature of the payload chamber and the flight altitude, affected the design. In addition, other lifting methods were evaluated, such as an evacuated chamber, heated atmospheric gas and augmented heated lifting gas. However, none of these methods proved viable.
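
    A first-order sketch of the buoyancy and drag-power balance that drives such a sizing loop (not the study's actual model; the atmospheric and vehicle values are rough assumptions):

```python
# Buoyant lift of a hydrogen envelope near the Venus surface and the propulsive
# power needed to overcome drag at low speed. All numbers are rough assumptions.
import math

rho_atm = 65.0      # kg/m^3, approximate CO2 density near the Venus surface
rho_gas = 3.0       # kg/m^3, hydrogen at ~92 bar and ~735 K (rough estimate)
volume = 20.0       # m^3, candidate envelope volume

lift_kg = (rho_atm - rho_gas) * volume           # buoyant lift expressed as supported mass
print(f"supported mass: {lift_kg:.0f} kg")

# Drag power for slow flight: P = 0.5 * rho * v^3 * Cd * A / prop_efficiency
v, cd, eta = 1.0, 0.05, 0.7
area = math.pi * (3 * volume / (4 * math.pi)) ** (2 / 3)   # frontal area of an equivalent sphere
drag_power = 0.5 * rho_atm * v**3 * cd * area / eta
print(f"propulsion power at {v} m/s: {drag_power:.0f} W")
```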

  1. LOW-ALTITUDE RECONNECTION INFLOW-OUTFLOW OBSERVATIONS DURING A 2010 NOVEMBER 3 SOLAR ERUPTION

    Energy Technology Data Exchange (ETDEWEB)

    Savage, Sabrina L.; Holman, Gordon; Su, Yang [NASA/Goddard Space Flight Center, Oak Ridge Associated Universities, 8800 Greenbelt Road, Code 671, Greenbelt, MD 20771 (United States); Reeves, Katharine K. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street MS 58, Cambridge, MA 02138 (United States); Seaton, Daniel B. [Royal Observatory of Belgium-SIDC, Avenue Circulaire 3, B-1180 Brussels (Belgium); McKenzie, David E. [Department of Physics, Montana State University, P.O. Box 173840, Bozeman, MT 59717-3840 (United States)

    2012-07-20

    For a solar flare occurring on 2010 November 3, we present observations using several SDO/AIA extreme-ultraviolet (EUV) passbands of an erupting flux rope followed by inflows sweeping into a current sheet region. The inflows are soon followed by outflows appearing to originate from near the termination point of the inflowing motion - an observation in line with standard magnetic reconnection models. We measure average inflow plane-of-sky speeds to range from ~150 to 690 km s⁻¹ with the initial, high-temperature inflows being the fastest. Using the inflow speeds and a range of Alfvén speeds, we estimate the Alfvénic Mach number which appears to decrease with time. We also provide inflow and outflow times with respect to RHESSI count rates and find that the fast, high-temperature inflows occur simultaneously with a peak in the RHESSI thermal light curve. Five candidate inflow-outflow pairs are identified with no more than a minute delay between detections. The inflow speeds of these pairs are measured to be ~10² km s⁻¹ with outflow speeds ranging from ~10² to 10³ km s⁻¹, indicating acceleration during the reconnection process. The fastest of these outflows are in the form of apparently traveling density enhancements along the legs of the loops rather than the loop apexes themselves. These flows could possibly either be accelerated plasma, shocks, or waves prompted by reconnection. The measurements presented here show an order of magnitude difference between the retraction speeds of the loops and the speed of the density enhancements within the loops - presumably exiting the reconnection site.

  2. LOW-ALTITUDE RECONNECTION INFLOW-OUTFLOW OBSERVATIONS DURING A 2010 NOVEMBER 3 SOLAR ERUPTION

    International Nuclear Information System (INIS)

    Savage, Sabrina L.; Holman, Gordon; Su, Yang; Reeves, Katharine K.; Seaton, Daniel B.; McKenzie, David E.

    2012-01-01

    For a solar flare occurring on 2010 November 3, we present observations using several SDO/AIA extreme-ultraviolet (EUV) passbands of an erupting flux rope followed by inflows sweeping into a current sheet region. The inflows are soon followed by outflows appearing to originate from near the termination point of the inflowing motion - an observation in line with standard magnetic reconnection models. We measure average inflow plane-of-sky speeds to range from ~150 to 690 km s⁻¹ with the initial, high-temperature inflows being the fastest. Using the inflow speeds and a range of Alfvén speeds, we estimate the Alfvénic Mach number which appears to decrease with time. We also provide inflow and outflow times with respect to RHESSI count rates and find that the fast, high-temperature inflows occur simultaneously with a peak in the RHESSI thermal light curve. Five candidate inflow-outflow pairs are identified with no more than a minute delay between detections. The inflow speeds of these pairs are measured to be ~10² km s⁻¹ with outflow speeds ranging from ~10² to 10³ km s⁻¹, indicating acceleration during the reconnection process. The fastest of these outflows are in the form of apparently traveling density enhancements along the legs of the loops rather than the loop apexes themselves. These flows could possibly either be accelerated plasma, shocks, or waves prompted by reconnection. The measurements presented here show an order of magnitude difference between the retraction speeds of the loops and the speed of the density enhancements within the loops - presumably exiting the reconnection site.
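
    The Alfvénic Mach number estimate mentioned above follows from M_A = v_inflow / v_A with v_A = B / sqrt(mu0 * rho); the sketch below evaluates it for generic coronal values, which are assumptions rather than measurements from this event.

```python
# Alfven speed and Alfvenic Mach number for assumed coronal conditions.
import math

mu0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m
B = 1e-3                        # T (10 G), assumed coronal field strength
n_e = 1e15                      # electrons per m^3, assumed density
rho = n_e * 1.67e-27            # kg/m^3, proton mass per electron (fully ionised hydrogen)

v_alfven = B / math.sqrt(mu0 * rho)        # m/s
v_inflow = 400e3                           # m/s, representative inflow speed from the text
print(f"v_A = {v_alfven / 1e3:.0f} km/s, M_A = {v_inflow / v_alfven:.2f}")
```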

  3. Atmospheric Structure and Diurnal Variations at Low Altitudes in the Martian Tropics

    Science.gov (United States)

    Hinson, David P.; Spiga, A.; Lewis, S.; Tellmann, S.; Pätzold, M.; Asmar, S.; Häusler, B.

    2013-10-01

    We are using radio occultation measurements from Mars Express, Mars Reconnaissance Orbiter, and Mars Global Surveyor to characterize the diurnal cycle in the lowest scale height above the surface. We focus on northern spring and summer, using observations from 4 Martian years at local times of 4-5 and 15-17 h. We supplement the observations with results obtained from large-eddy simulations and through data assimilation by the UK spectral version of the LMD Mars Global Circulation Model. We previously investigated the depth of the daytime convective boundary layer (CBL) and its variations with surface elevation and surface properties. We are now examining unusual aspects of the temperature structure observed at night. Most important, predawn profiles in the Tharsis region contain an unexpected layer of neutral static stability at pressures of 200-300 Pa with a depth of 4-5 km. The mixed layer is bounded above by a midlevel temperature inversion and below by another strong inversion adjacent to the surface. The narrow temperature minimum at the base of the midlevel inversion suggests the presence of a water ice cloud layer, with the further implication that radiative cooling at cloud level can induce convective activity at lower altitudes. Conversely, nighttime profiles in Amazonis show no sign of a midlevel inversion or a detached mixed layer. These regional variations in the nighttime temperature structure appear to arise in part from large-scale variations in topography, which have several notable effects. First, the CBL is much deeper in the Tharsis region than in Amazonis, owing to a roughly 6-km difference in surface elevation. Second, large-eddy simulations show that daytime convection is not only deeper above Tharsis but also considerably more intense than it is in Amazonis. Finally, the daytime surface temperatures are comparable in the two regions, so that Tharsis acts as an elevated heat source throughout the CBL. These topographic effects are expected to

  4. Applications of Low Altitude Remote Sensing in Agriculture upon Farmers' Requests– A Case Study in Northeastern Ontario, Canada

    Science.gov (United States)

    Zhang, Chunhua; Walters, Dan; Kovacs, John M.

    2014-01-01

    With the growth of the low altitude remote sensing (LARS) industry in recent years, their practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain, including the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence from local weather conditions (e.g. clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of LARS service provided by private consultants or in collaboration with LARS scientific research teams. PMID:25386696

  5. Applications of low altitude remote sensing in agriculture upon farmers' requests--a case study in northeastern Ontario, Canada.

    Science.gov (United States)

    Zhang, Chunhua; Walters, Dan; Kovacs, John M

    2014-01-01

    With the growth of the low altitude remote sensing (LARS) industry in recent years, their practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain, including the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence from local weather conditions (e.g. clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of LARS service provided by private consultants or in collaboration with LARS scientific research teams.

  6. Applications of low altitude remote sensing in agriculture upon farmers' requests--a case study in northeastern Ontario, Canada.

    Directory of Open Access Journals (Sweden)

    Chunhua Zhang

    With the growth of the low altitude remote sensing (LARS) industry in recent years, their practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain, including the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence from local weather conditions (e.g. clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of LARS service provided by private consultants or in collaboration with LARS scientific research teams.

  7. Adoption of an unmanned helicopter for low-altitude remote sensing to estimate yield and total biomass of a rice crop

    Science.gov (United States)

    A radio-controlled unmanned helicopter-based LARS (Low-Altitude Remote Sensing) platform was used to acquire quality images of high spatial and temporal resolution, in order to estimate yield and total biomass of a rice crop (Oryza sativa L.). Fifteen rice field plots with five N-treatments (0, 33,...

  8. A low-altitude mountain range as an important refugium for two narrow endemics in the Southwest Australian Floristic Region biodiversity hotspot

    NARCIS (Netherlands)

    Keppel, Gunnar; Robinson, Todd P.; Wardell-Johnson, Grant W.; Yates, Colin J.; Niel, Van Kimberly P.; Byrne, Margaret; Schut, Tom

    2016-01-01

    Background and Aims Low-altitude mountains constitute important centres of diversity in landscapes with little topographic variation, such as the Southwest Australian Floristic Region (SWAFR). They also provide unique climatic and edaphic conditions that may allow them to function as refugia. We

  9. Accurate modeling of UV written waveguide components

    DEFF Research Database (Denmark)

    Svalgaard, Mikael

    BPM simulation results of UV written waveguide components that are indistinguishable from measurements can be achieved on the basis of trajectory scan data and an equivalent step index profile that is very easy to measure.

  10. Accurate modelling of UV written waveguide components

    DEFF Research Database (Denmark)

    Svalgaard, Mikael

    BPM simulation results of UV written waveguide components that are indistinguishable from measurements can be achieved on the basis of trajectory scan data and an equivalent step index profile that is very easy to measure.

  11. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  12. Ecological Risk Assessment Framework for Low-Altitude Overflights by Fixed-Wing and Rotary-Wing Military Aircraft

    Energy Technology Data Exchange (ETDEWEB)

    Efroymson, R.A.

    2001-01-12

    This is a companion report to the risk assessment framework proposed by Suter et al. (1998): "A Framework for Assessment of Risks of Military Training and Testing to Natural Resources," hereafter referred to as the "generic framework." The generic framework is an ecological risk assessment methodology for use in environmental assessments on Department of Defense (DoD) installations. In the generic framework, the ecological risk assessment framework of the US Environmental Protection Agency (EPA 1998) is modified for use in the context of (1) multiple and diverse stressors and activities at a military installation and (2) risks resulting from causal chains, e.g., effects on habitat that indirectly impact wildlife. Both modifications are important if the EPA framework is to be used on military installations. In order for the generic risk assessment framework to be useful to DoD environmental staff and contractors, the framework must be applied to specific training and testing activities. Three activity-specific ecological risk assessment frameworks have been written (1) to aid environmental staff in conducting risk assessments that involve these activities and (2) to guide staff in the development of analogous frameworks for other DoD activities. The three activities are: (1) low-altitude overflights by fixed-wing and rotary-wing aircraft (this volume), (2) firing at targets on land, and (3) ocean explosions. The activities were selected as priority training and testing activities by the advisory committee for this project.

  13. Tritium permeation model for plasma facing components

    Science.gov (United States)

    Longhurst, G. R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spread-sheet software such as Lotus 123. Comparison calculations are provided with the verified and validated TMAP4 transient code with good agreement. Results of calculations for the ITER CDA diverter are also included.
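
    As a generic illustration (not the report's spreadsheet model), a diffusion-limited steady-state permeation flux through a slab with Sieverts'-law surface concentrations can be estimated as follows; all material values are placeholders.

```python
# Steady-state, diffusion-limited permeation: J = D * K_s * (sqrt(P_up) - sqrt(P_down)) / d
import math

D = 1e-9           # m^2/s, assumed tritium diffusivity at operating temperature
K_s = 1e-3         # mol m^-3 Pa^-0.5, assumed Sieverts' constant
d = 2e-3           # m, wall thickness
P_up, P_down = 100.0, 0.0   # Pa, upstream / downstream tritium partial pressures

flux = D * K_s * (math.sqrt(P_up) - math.sqrt(P_down)) / d   # mol m^-2 s^-1
print(f"permeation flux: {flux:.2e} mol/(m^2 s)")
```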

  14. Tritium permeation model for plasma facing components

    International Nuclear Information System (INIS)

    Longhurst, G.R.

    1992-12-01

    This report documents the development of a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. The model is developed for solution using commercial spread-sheet software such as Lotus 123. Comparison calculations are provided with the verified and validated TMAP4 transient code with good agreement. Results of calculations for the ITER CDA diverter are also included

  15. Nitrogen component in nonpoint source pollution models

    Science.gov (United States)

    Pollutants entering a water body can be very destructive to the health of that system. Best Management Practices (BMPs) and/or conservation practices are used to reduce these pollutants, but understanding the most effective practices is very difficult. Watershed models are an effective tool to aid...

  16. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though....../Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from...... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian....

  17. Model-integrating software components engineering flexible software systems

    CERN Document Server

    Derakhshanmanesh, Mahdi

    2015-01-01

    In his study, Mahdi Derakhshanmanesh builds on the state of the art in modeling by proposing to integrate models into running software on the component-level without translating them to code. Such so-called model-integrating software exploits all advantages of models: models implicitly support a good separation of concerns, they are self-documenting and thus improve understandability and maintainability and in contrast to model-driven approaches there is no synchronization problem anymore between the models and the code generated from them. Using model-integrating components, software will be

  18. Modeling money demand components in Lebanon using autoregressive models

    International Nuclear Information System (INIS)

    Mourad, M.

    2008-01-01

    This paper analyses the monetary aggregate in Lebanon and its different components using the methodology of AR models. Thirteen variables in monthly data have been studied for the period January 1990 through December 2005. Using the Augmented Dickey-Fuller (ADF) procedure, twelve variables are integrated at order 1, thus they need the filter (1-B) to become stationary; however, the variable X13,t (claims on private sector) becomes stationary with the filter (1-B)(1-B^12). The ex-post forecasts have been calculated for twelve horizons and for one horizon (one-step ahead forecast). The quality of the forecasts has been measured using the MAPE criterion, for which the forecasts are good because the MAPE values are low. Finally, a continuation of this research using the cointegration approach is proposed. (author)
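
    The modelling pattern described above can be sketched as follows: difference a monthly series once (the (1-B) filter), fit an AR model, and score twelve ex-post forecasts with the MAPE. The series is simulated, not the Lebanese data.

```python
# AR model on a once-differenced series with MAPE on a 12-month holdout.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
n = 192
y = np.cumsum(rng.normal(0.5, 1.0, n)) + 100           # integrated of order 1

train, test = y[:-12], y[-12:]                          # hold out the last 12 months
model = ARIMA(train, order=(2, 1, 0)).fit()             # AR(2) applied to the (1-B)-filtered series
forecast = model.forecast(steps=12)

mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"MAPE over 12 ex-post forecasts: {mape:.2f}%")
```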

  19. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae; Top, Søren

    2008-01-01

    , communication and constraints, using computational blocks and aggregates for both discrete and continuous behaviour, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to encompass the demands to more functionality, at even lower prices, and with opposite...... to be analyzed. One way of doing that is to integrate in wrapper files the model back into Simulink S-functions, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set...... of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behaviour, and the transformation of the software system into the S-functions. The general aim of this work is the improvement of multi-disciplinary development of embedded systems with the focus on the relation...

  20. Public health component in building information modeling

    Science.gov (United States)

    Trufanov, A. I.; Rossodivita, A.; Tikhomirov, A. A.; Berestneva, O. G.; Marukhina, O. V.

    2018-05-01

    A building information modelling (BIM) concept has established itself as an effective and practical approach to plan, design, construct, and manage buildings and infrastructure. Analysis of the governance literature has shown that the BIM-developed tools do not fully take into account the growing demands from the ecology and health fields. In this connection, it is possible to offer an optimal way of adapting such tools to the necessary consideration of the sanitary and hygienic specifications of materials used in the construction industry. It is proposed to do this through the introduction of assessments that meet the requirements of national sanitary standards. This approach was demonstrated in a case study of the Revit® program.

  1. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
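
    As a simplified stand-in for the model selection step (the paper's mixed ICA/PCA algorithm is not reproduced here), the sketch below chooses a model order by cross-validated log-likelihood using probabilistic PCA, which illustrates the same cross-validation mechanism on synthetic data.

```python
# Cross-validated model-order selection with probabilistic PCA
# (PCA.score returns the average log-likelihood of held-out samples).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 10))   # 3 true sources observed in 10 channels

scores = []
for k in range(1, 8):
    ll = cross_val_score(PCA(n_components=k), X, cv=5).mean()
    scores.append((k, ll))
best_k = max(scores, key=lambda t: t[1])[0]
print(scores, "-> selected order:", best_k)
```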

  2. Adding heat to the live-high train-low altitude model: a practical insight from professional football

    Science.gov (United States)

    Buchheit, M; Racinais, S; Bilsborough, J; Hocking, J; Mendez-Villanueva, A; Bourdon, P C; Voss, S; Livingston, S; Christian, R; Périard, J; Cordy, J; Coutts, A J

    2013-01-01

    Objectives To examine with a parallel group study design the performance and physiological responses to a 14-day off-season ‘live high-train low in the heat’ training camp in elite football players. Methods Seventeen professional Australian Rules Football players participated in outdoor football-specific skills (32±1°C, 11.5 h) and indoor strength (23±1°C, 9.3 h) sessions and slept (12 nights) and cycled indoors (4.3 h) in either normal air (NORM, n=8) or normobaric hypoxia (14±1 h/day, FiO2 15.2–14.3%, corresponding to a simulated altitude of 2500–3000 m, hypoxic (HYP), n=9). They completed the Yo-Yo Intermittent Recovery level 2 (Yo-YoIR2) in temperate conditions (23±1°C, normal air) precamp (Pre) and postcamp (Post). Plasma volume (PV) and haemoglobin mass (Hbmass) were measured at similar times and 4 weeks postcamp (4WPost). Sweat sodium concentration ((Na+)sweat) was measured Pre and Post during a heat-response test (44°C). Results Both groups showed very large improvements in Yo-YoIR2 at Post (+44%; 90% CL 38, 50), with no between-group differences in the changes (−1%; −9, 9). Postcamp, large changes in PV (+5.6%; −1.8, 5.6) and (Na+)sweat (−29%; −37, −19) were observed in both groups, while Hbmass only moderately increased in HYP (+2.6%; 0.5, 4.5). At 4WPost, there was a likely slightly greater increase in Hbmass (+4.6%; 0.0, 9.3) and PV (+6%; −5, 18, unclear) in HYP than in NORM. Conclusions The combination of heat and hypoxic exposure during sleep/training might offer a promising ‘conditioning cocktail’ in team sports. PMID:24282209

  3. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author)
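
    A toy sketch of the trace-back idea, with invented component tables: each component maps a deviation of its output to its local causes (internal failures or deviations of its inputs), and the tree is grown by expanding input deviations recursively.

```python
# Recursive trace-back from a top event through simplified component cause tables.
components = {
    "pump_out_no_flow":  {"internal": ["pump stuck"], "inputs": ["valve_out_no_flow"]},
    "valve_out_no_flow": {"internal": ["valve closed"], "inputs": ["tank_out_no_flow"]},
    "tank_out_no_flow":  {"internal": ["tank empty"], "inputs": []},
}

def trace_back(event, depth=0):
    """Print an OR-gate fault tree rooted at the given output deviation."""
    entry = components.get(event)
    if entry is None:
        return
    for cause in entry["internal"]:
        print("  " * depth + f"OR basic event: {cause}")
    for upstream in entry["inputs"]:
        print("  " * depth + f"OR intermediate: {upstream}")
        trace_back(upstream, depth + 1)

trace_back("pump_out_no_flow")   # top event: no flow at the pump outlet
```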

  4. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, whereby the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute force-type methods which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
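
    The idea can be illustrated numerically with two chained components y = f(x) and r = g(y): one adjoint (vector-Jacobian) evaluation per response of the final component reproduces the full composed sensitivity. The Jacobians below are arbitrary stand-ins for component sensitivities.

```python
# Adjoint transfer across two components versus brute-force composition of Jacobians.
import numpy as np

rng = np.random.default_rng(8)
J_f = rng.normal(size=(5, 3))     # dy/dx of component 1 (5 outputs, 3 inputs)
J_g = rng.normal(size=(2, 5))     # dr/dy of component 2 (2 responses)

# Forward (brute-force) composition: full 2x3 sensitivity matrix
forward = J_g @ J_f

# Adjoint transfer: one backward pass per response of the final component
adjoint = np.vstack([J_f.T @ J_g[i] for i in range(J_g.shape[0])])

print(np.allclose(forward, adjoint))   # True: both give dr/dx
```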

  5. Investigating the Surface and Subsurface in Karstic Regions – Terrestrial Laser Scanning versus Low-Altitude Airborne Imaging and the Combination with Geophysical Prospecting

    Directory of Open Access Journals (Sweden)

    Nora Tilly

    2017-08-01

    Combining measurements of the surface and subsurface is a promising approach to understand the origin and current changes of karstic forms, since subterraneous processes are often the initial driving force. A karst depression in south-west Germany was investigated in a comprehensive campaign with remote sensing and geophysical prospecting. This contribution has two objectives: firstly, comparing terrestrial laser scanning (TLS) and low-altitude airborne imaging from an unmanned aerial vehicle (UAV) regarding their performance in capturing the surface; secondly, establishing a suitable way of combining this 3D surface data with data from the subsurface, derived by geophysical prospecting. Both remote sensing approaches performed satisfactorily and the established digital elevation models (DEMs) differ only slightly. These minor discrepancies result essentially from the different viewing geometries and post-processing concepts, for example whether the vegetation was removed or not. Validation analyses against highly accurate DGPS-derived point data sets revealed slightly better results for the DEM(TLS) with a mean absolute difference of 0.03 m to 0.05 m and a standard deviation of 0.03 m to 0.07 m (DEM(UAV): mean absolute difference: 0.11 m to 0.13 m; standard deviation: 0.09 m to 0.11 m). The 3D surface data and the 2D image of the vertical cross section through the subsurface along a geophysical profile were combined in block diagrams. The data sets fit very well and give a first impression of the connection between surface and subsurface structures. Since capturing the subsurface with this method is limited to 2D and the data acquisition is quite time consuming, further investigations are necessary for reliable statements about subterraneous structures, how these may induce surface changes, and the origin of this karst depression. Moreover, geophysical prospecting can only produce a suspected image of the subsurface since the apparent resistivity is measured
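
    The validation statistics quoted above can be reproduced with a few lines once DEM heights are sampled at the DGPS check points; the values below are made up for illustration.

```python
# Mean absolute difference and standard deviation between DEM heights and DGPS references.
import numpy as np

rng = np.random.default_rng(9)
dgps_z = rng.uniform(310.0, 320.0, 100)                  # reference heights (m)
dem_z = dgps_z + rng.normal(0.02, 0.05, dgps_z.size)     # DEM heights sampled at the same points

diff = dem_z - dgps_z
print(f"mean absolute difference: {np.mean(np.abs(diff)):.3f} m")
print(f"standard deviation:       {np.std(diff, ddof=1):.3f} m")
```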

  6. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    International Nuclear Information System (INIS)

    Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V

    2013-01-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are being used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task. This is due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is accomplished by aggregating the basic component models. System parameters are identified using a finite element technique, and the resulting model has then been used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance

  7. Components in models of learning: Different operationalisations and relations between components

    Directory of Open Access Journals (Sweden)

    Mirkov Snežana

    2013-01-01

    Full Text Available This paper provides a presentation of different operationalisations of components in different models of learning. Special emphasis is on the empirical verifications of relations between components. Starting from the research of congruence between learning motives and strategies, underlying the general model of school learning that comprises different approaches to learning, we have analyzed the empirical verifications of the factor structure of instruments containing the scales of motives and the learning strategies corresponding to these motives. Considering the problems in the conceptualization of the achievement approach to learning, we have discussed the ways of operationalising goal orientations and exploring their role in using learning strategies, especially within the model of the regulation of constructive learning processes. This model has served as the basis for researching learning styles that are the combination of a large number of components. Complex relations between the components point to the need for further investigation of the constructs involved in various models. We have discussed the findings and implications of the studies of relations between the components involved in different models, especially between learning motives/goals and learning strategies. We have analyzed the role of regulation in the learning process, whose elaboration, as indicated by empirical findings, can contribute to a more precise operationalisation of certain learning components. [Project of the Ministry of Science of the Republic of Serbia, No. 47008: Improving the quality and accessibility of education in the modernisation processes of Serbia, and No. 179034: From encouraging initiative, cooperation and creativity in education to new roles and identities in society]

  8. Models for integrated components coupled with their EM environment

    NARCIS (Netherlands)

    Ioan, D.; Schilders, W.H.A.; Ciuprina, G.; Meijs, van der N.P.; Schoenmaker, W.

    2008-01-01

    Abstract: Purpose – The main aim of this study is the modelling of the interaction of on-chip components with their electromagnetic environment. Design/methodology/approach – The integrated circuit is decomposed in passive and active components interconnected by means of terminals and connectors

  9. The dynamic cusp at low altitudes: A case study combining Viking, DMSP, and Sondrestrom incoherent scatter radar observations

    International Nuclear Information System (INIS)

    Watermann, J.; Delabeaujardiere, O.; Lummerzheim, D.; Woch, J.; Newell, P.T.; Potemra, T.A.; Rich, F.J.; Shapshak, M.

    1992-01-01

    A case study involving data from three satellites and a ground-based radar is presented. Focus is on a detailed discussion of observations of the dynamic cusp made on 24 Sep. 1986 in the dayside high-latitude ionosphere and interior magnetosphere. The relevant data from space-borne and ground-based sensors are presented. They include in-situ particle and field measurements from the DMSP-F7 and Viking spacecraft and Sondrestrom radar observations of the ionosphere. These data are augmented by observations of the IMF and the solar wind plasma. The observations are compared with predictions of the ionospheric response to the observed particle precipitation, obtained from an auroral model. It is shown that observations and model calculations agree well and provide a picture of the ionospheric footprint of the cusp in an invariant latitude versus local time frame. The combination of Viking, Sondrestrom radar, and IMP-8 data suggests that an ionospheric signature of the dynamic cusp was observed. Its spatial variation over time, which appeared closely related to the southward component of the IMF, was monitored

  10. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility and hardware's real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variation of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  11. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order

  12. Effects of hiking at moderate and low altitude on cardiovascular parameters in male patients with metabolic syndrome: Austrian Moderate Altitude Study.

    Science.gov (United States)

    Neumayr, Günther; Fries, Dietmar; Mittermayer, Markus; Humpeler, Egon; Klingler, Anton; Schobersberger, Wolfgang; Spiesberger, Reinhard; Pokan, Rochus; Schmid, Peter; Berent, Robert

    2014-09-01

    Physical activity is a cornerstone in therapy for patients with metabolic syndrome. Walking and hiking in a mountain scenery represents an ideal approach to make them move. The Austrian Moderate Altitude Study (AMAS) 2000 main study is a randomized controlled trial to investigate the cardiovascular effects of hiking at moderate altitude on patients with metabolic syndrome compared with a control group at low altitude, to assess a potential altitude-specific effect. Seventy-one male patients with metabolic syndrome were randomly assigned to a moderate altitude group (at 1700 m), with 36 participants, or to a low altitude group (at 200 m), with 35 participants. The 3-week vacation program included 12 hiking tours (4 per week, average duration 2.5 hours, intensity 55% to 65% of heart rate maximum). Physical parameters, performance capacity, 24-hour blood pressure, and heart rate profiles were obtained before, during, and after the stay. In both groups, we found a significant mean weight loss of -3.13 kg; changes in performance capacity were minor. Systolic, diastolic, and mean arterial pressures and circadian heart rate profiles were significantly reduced in both groups, with no differences between them. Consequently, the pressure-rate product was reduced as well. All study participants tolerated the vacation well without any adverse events. A 3-week hiking vacation at moderate or low altitude is safe for patients with metabolic syndrome and provides several improvements in their cardiovascular parameters. The cardiovascular benefits achieved are more likely to be the result of regular physical activity than the altitude-specific effect of a mountain environment. Copyright © 2014 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.

  13. Current flow and pair creation at low altitude in rotation-powered pulsars' force-free magnetospheres: space charge limited flow

    Science.gov (United States)

    Timokhin, A. N.; Arons, J.

    2013-02-01

    We report the results of an investigation of particle acceleration and electron-positron plasma generation at low altitude in the polar magnetic flux tubes of rotation-powered pulsars, when the stellar surface is free to emit whatever charges and currents are demanded by the force-free magnetosphere. We apply a new 1D hybrid plasma simulation code to the dynamical problem, using Particle-in-Cell methods for the dynamics of the charged particles, including a determination of the collective electrostatic fluctuations in the plasma, combined with a Monte Carlo treatment of the high-energy gamma-rays that mediate the formation of the electron-positron pairs. We assume the electric current flowing through the pair creation zone is fixed by the much higher inductance magnetosphere, and adopt the results of force-free magnetosphere models to provide the currents which must be carried by the accelerator. The models are spatially one dimensional, and designed to explore the physics, although of practical relevance to young, high-voltage pulsars. We observe novel behaviour: (a) When the current density j is less than the Goldreich-Julian value (0 < j/jGJ < 1), the flow is quasi-steady, consisting of a charge-separated beam accompanied by electrically trapped particles with the same sign of charge as the beam. The voltage drops are of the order of mc2/e, and pair creation is absent. (b) When the current density exceeds the Goldreich-Julian value (j/jGJ > 1), the system develops high voltage drops (TV or greater), causing emission of curvature gamma-rays and intense bursts of pair creation. The bursts exhibit limit cycle behaviour, with characteristic time-scales somewhat longer than the relativistic fly-by time over distances comparable to the polar cap diameter (microseconds). (c) In return current regions, where j/jGJ < 0, the generated pairs allow the system to simultaneously carry the magnetospherically prescribed currents and adjust the charge density and average electric field to force-free conditions. We also elucidate the conditions for pair creating beam flow to be

  14. Investigation of the Crust of the Pannonian Basin, Hungary Using Low-Altitude CHAMP Horizontal Gradient Magnetic Anomalies

    Science.gov (United States)

    Taylor, Patrick T.; Kis, Karoly I.; Puszta, Sandor; Wittmann, Geza; Kim, Hyung Rae; Toronyi, B.

    2011-01-01

    The Pannonian Basin is a deep intra-continental basin that formed as part of the Alpine orogeny. It is some 600 by 500 km in area and centered on Hungary. This area was chosen since it has one of the thinnest continental crusts in Europe and is a region of complex tectonic structures. In order to study the nature of the crustal basement we used the long-wavelength magnetic anomalies acquired by the CHAMP satellite. The SWARM constellation, scheduled to be launched next year, will have two lower-altitude satellites flying abreast, with a separation of between ca. 150 and 200 km, to record the horizontal magnetic gradient. Since the CHAMP satellite has been in orbit for eight years and has obtained an extensive range of data, both vertically and horizontally, there is a large enough database to compute the horizontal magnetic gradients over the Pannonian Basin region using these many CHAMP orbits. We recomputed a satellite magnetic anomaly map, using the spherical-cap method of Haines (1985), the technique of Alsdorf et al. (1994), and the spherical harmonic coefficients of MF6 (Maus et al., 2008), employing the latest and lowest altitude CHAMP data. We then computed the horizontal magnetic anomaly gradients (Kis and Puszta, 2006) in order to determine how these component data will improve our interpretation and to preview what the SWARM mission will reveal with reference to the horizontal gradient anomalies. The gradient amplitude of a 1000 km northeast-southwest profile through our horizontal component anomaly map varied from 0 to 0.025 nT/km, with twin positive anomalies (0.025 and 0.023 nT/km) separated by a sharp negative anomaly at 0 nT/km. Horizontal gradients indicate major magnetization boundaries in the crust (Dole and Jordan, 1978; Cordell and Grauch, 1985). Our gradient anomaly was modeled with a two-dimensional body, and the anomaly, of some 200 km, correlates with a 200 km area of crustal thinning in the southwestern Pannonian Basin.

  15. Robustness of Component Models in Energy System Simulators

    DEFF Research Database (Denmark)

    Elmegaard, Brian

    2003-01-01

    During the development of the component-based energy system simulator DNA (Dynamic Network Analysis), several obstacles to easy use of the program have been observed. Some of these have to do with the nature of the program being based on a modelling language, not a graphical user interface (GUI). Others have to do with the interaction between models of the nature of the substances in an energy system (e.g., fuels, air, flue gas), models of the components in a system (e.g., heat exchangers, turbines, pumps), and the solver for the system of equations. This paper proposes that the interaction...

  16. Option valuation with the simplified component GARCH model

    DEFF Research Database (Denmark)

    Dziubinski, Matt P.

    We introduce the Simplified Component GARCH (SC-GARCH) option pricing model, show and discuss sufficient conditions for non-negativity of the conditional variance, apply it to low-frequency and high-frequency financial data, and consider the option valuation, comparing the model performance...

  17. Integrating environmental component models. Development of a software framework

    NARCIS (Netherlands)

    Schmitz, O.

    2014-01-01

    Integrated models consist of interacting component models that represent various natural and social systems. They are important tools to improve our understanding of environmental systems, to evaluate cause–effect relationships of human–natural interactions, and to forecast the behaviour of

  18. Effects of 12-Week Endurance Training at Natural Low Altitude on the Blood Redox Homeostasis of Professional Adolescent Athletes: A Quasi-Experimental Field Trial

    Directory of Open Access Journals (Sweden)

    Tomas K. Tong

    2016-01-01

    Full Text Available This field study investigated the influences of exposure to natural low altitude on endurance training-induced alterations of redox homeostasis in professional adolescent runners undergoing a 12-week off-season conditioning program at an altitude of 1700 m (Alt), by comparison with that of their counterparts completing the program at sea level (SL). For age-, gender-, and Tanner-stage-matched comparison, 26 runners (n = 13 in each group) were selected and studied. Following the conditioning program, unaltered serum levels of thiobarbituric acid reactive substances (TBARS), total antioxidant capacity (T-AOC), and superoxide dismutase accompanied with an increase in oxidized glutathione (GSSG) and decreases of xanthine oxidase, reduced glutathione (GSH), and GSH/GSSG ratio were observed in both Alt and SL groups. Serum glutathione peroxidase and catalase did not change in SL, whereas these enzymes, respectively, decreased and increased in Alt. Uric acid (UA) decreased in SL and increased in Alt. Moreover, the decreases in GSH and GSH/GSSG ratio in Alt were relatively lower compared to those in SL. Further, significant interindividual correlations were found between changes in catalase and TBARS, as well as between UA and T-AOC. These findings suggest that long-term training at natural low altitude is unlikely to cause retained oxidative stress in professional adolescent runners.

  19. Effects of 12-Week Endurance Training at Natural Low Altitude on the Blood Redox Homeostasis of Professional Adolescent Athletes: A Quasi-Experimental Field Trial.

    Science.gov (United States)

    Tong, Tomas K; Kong, Zhaowei; Lin, Hua; He, Yeheng; Lippi, Giuseppe; Shi, Qingde; Zhang, Haifeng; Nie, Jinlei

    2016-01-01

    This field study investigated the influences of exposure to natural low altitude on endurance training-induced alterations of redox homeostasis in professional adolescent runners undergoing a 12-week off-season conditioning program at an altitude of 1700 m (Alt), by comparison with that of their counterparts completing the program at sea-level (SL). For age-, gender-, and Tanner-stage-matched comparison, 26 runners (n = 13 in each group) were selected and studied. Following the conditioning program, unaltered serum levels of thiobarbituric acid reactive substances (TBARS), total antioxidant capacity (T-AOC), and superoxide dismutase accompanied with an increase in oxidized glutathione (GSSG) and decreases of xanthine oxidase, reduced glutathione (GSH), and GSH/GSSG ratio were observed in both Alt and SL groups. Serum glutathione peroxidase and catalase did not change in SL, whereas these enzymes, respectively, decreased and increased in Alt. Uric acid (UA) decreased in SL and increased in Alt. Moreover, the decreases in GSH and GSH/GSSG ratio in Alt were relatively lower compared to those in SL. Further, significant interindividual correlations were found between changes in catalase and TBARS, as well as between UA and T-AOC. These findings suggest that long-term training at natural low altitude is unlikely to cause retained oxidative stress in professional adolescent runners.

  20. Component and system simulation models for High Flux Isotope Reactor

    International Nuclear Information System (INIS)

    Sozer, A.

    1989-08-01

    Component models for the High Flux Isotope Reactor (HFIR) have been developed. The models are HFIR core, heat exchangers, pressurizer pumps, circulation pumps, letdown valves, primary head tank, generic transport delay (pipes), system pressure, loop pressure-flow balance, and decay heat. The models were written in FORTRAN and can be run on different computers, including IBM PCs, as they do not use any specific simulation languages such as ACSL or CSMP. 14 refs., 13 figs
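
    To make one of the listed components concrete, here is a minimal sketch (Python, not the original HFIR FORTRAN) of a generic transport-delay "pipe" component of the kind mentioned above, modelled as plug flow: the outlet temperature is the inlet temperature delayed by the fluid transit time.

```python
# Minimal sketch (assumptions, not the HFIR code): a generic transport-delay pipe.
from collections import deque

class TransportDelayPipe:
    def __init__(self, volume_m3, dt_s, initial_temp_c=50.0):
        self.volume = volume_m3          # pipe volume [m^3]
        self.dt = dt_s                   # simulation time step [s]
        self.buffer = deque([initial_temp_c])

    def step(self, inlet_temp_c, flow_m3_s):
        transit_time = self.volume / max(flow_m3_s, 1e-9)   # plug-flow delay [s]
        n_slots = max(1, int(round(transit_time / self.dt)))
        self.buffer.append(inlet_temp_c)
        while len(self.buffer) > n_slots:                    # shrink if flow speeds up
            self.buffer.popleft()
        return self.buffer[0]                                # delayed outlet temperature

pipe = TransportDelayPipe(volume_m3=0.5, dt_s=1.0)
for t in range(10):
    outlet = pipe.step(inlet_temp_c=50.0 + t, flow_m3_s=0.05)
```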

  1. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos

    2012-07-01

    We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis. © 2012 ACM 0730-0301/2012/08-ART55.

  2. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database system, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3].[1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04, Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004.[2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  3. Modeling fabrication of nuclear components: An integrative approach

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.

    1996-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex have presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components in an environment of intense regulation and shrinking budgets. This dissertation presents an integrative two-stage approach to modeling the casting operation for fabrication of nuclear weapon primary components. The first stage optimizes personnel radiation exposure for the casting operation layout by modeling the operation as a facility layout problem formulated as a quadratic assignment problem. The solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
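
    The first-stage formulation can be sketched as follows (all flows, distances, and the simple swap heuristic below are illustrative assumptions, not the dissertation's data or code): exposure is treated as proportional to material flow times travel distance, giving the classic quadratic assignment objective, which is then improved by an evolutionary, mutation-based search.

```python
# Minimal sketch (hypothetical values): a QAP layout objective with an evolutionary heuristic.
import random

flow = [[0, 5, 2],      # assumed trips per week between casting operations
        [5, 0, 3],
        [2, 3, 0]]
dist = [[0, 10, 20],    # assumed distances between candidate locations [m]
        [10, 0, 15],
        [20, 15, 0]]

def exposure(layout):
    """QAP objective: sum of flow(i,j) * distance(location_of_i, location_of_j)."""
    return sum(flow[i][j] * dist[layout[i]][layout[j]]
               for i in range(len(layout)) for j in range(len(layout)))

def evolve(n_generations=200):
    best = list(range(len(flow)))
    random.shuffle(best)
    for _ in range(n_generations):
        child = best[:]
        a, b = random.sample(range(len(child)), 2)   # swap two operations' locations
        child[a], child[b] = child[b], child[a]
        if exposure(child) < exposure(best):         # keep the lower-exposure layout
            best = child
    return best, exposure(best)

layout, score = evolve()
```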

  4. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    Alammouri, Ahmad; Elsawy, Hesham; Salem, Ahmed Sultan; Di Renzo, Marco; Alouini, Mohamed-Slim

    2016-01-01

    to the Nakagami-m fading in some special cases. However, neither the Rayleigh nor the Nakagami-m accounts for dominant specular components (DSCs) which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks

  5. Modeling the evaporation of sessile multi-component droplets

    NARCIS (Netherlands)

    Diddens, C.; Kuerten, Johannes G.M.; van der Geld, C.W.M.; Wijshoff, H.M.A.

    2017-01-01

    We extended a mathematical model for the drying of sessile droplets, based on the lubrication approximation, to binary mixture droplets. This extension is relevant for e.g. inkjet printing applications, where inks consisting of several components are used. The extension involves the generalization of

  6. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling is presented that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
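
    The low-rank-plus-sparse idea behind this kind of method can be sketched as follows (an illustration under assumed parameters, not the paper's algorithm): the background is explained by a small subspace that is updated one frame at a time, and the foreground is the sparse residual obtained by soft-thresholding.

```python
# Minimal sketch (assumptions; not the incremental PCP algorithm of the paper).
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

class IncrementalBackgroundModel:
    def __init__(self, n_pixels, rank=3, step=0.05, lam=25.0):
        rng = np.random.default_rng(0)
        self.U, _ = np.linalg.qr(rng.standard_normal((n_pixels, rank)))  # subspace basis
        self.step, self.lam = step, lam

    def process(self, frame):
        v = frame.ravel().astype(float)
        coeff = self.U.T @ v                      # project onto current background subspace
        background = self.U @ coeff
        foreground = soft_threshold(v - background, self.lam)   # sparse residual
        residual = v - background - foreground
        # crude incremental subspace update towards the unexplained part of the frame
        self.U += self.step * np.outer(residual, coeff) / (coeff @ coeff + 1e-9)
        self.U, _ = np.linalg.qr(self.U)          # re-orthonormalise
        return background.reshape(frame.shape), foreground.reshape(frame.shape)

model = IncrementalBackgroundModel(n_pixels=64 * 48)
frame = np.full((48, 64), 120.0)                  # a hypothetical grey frame
bg, fg = model.process(frame)
```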

  7. Do Knowledge-Component Models Need to Incorporate Representational Competencies?

    Science.gov (United States)

    Rau, Martina Angela

    2017-01-01

    Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…

  8. Hybrid time/frequency domain modeling of nonlinear components

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...

  9. Data and information needs for WPP testing and component modeling

    International Nuclear Information System (INIS)

    Kuhn, W.L.

    1987-01-01

    The modeling task of the Waste Package Program (WPP) is to develop conceptual models that describe the interactions of waste package components with their environment and the interactions among waste package components. The task includes development and maintenance of a database of experimental data, and statistical analyses to fit model coefficients, test the significance of the fits, and propose experimental designs. The modeling task collaborates with experimentalists to apply physicochemical principles to develop the conceptual models, with emphasis on the subsequent mathematical development. The reason for including the modeling task in the predominantly experimental WPP is to keep the modeling of component behavior closely associated with the experimentation. Whenever possible, waste package degradation processes are described in terms of chemical reactions or transport processes. The integration of equations for assumed or calculated repository conditions predicts variations with time in the repository. Within the context of the waste package program, the composition and rate of arrival of brine to the waste package are environmental variables. These define the environment to be simulated or explored during waste package component and interactions testing. The containment period is characterized by rapid changes in temperature, pressure, oxygen fugacity, and salt porosity. Brine migration is expected to be most rapid during this period. The release period is characterized by modest and slowly changing temperatures, high pressure, low oxygen fugacity, and low porosity. The need is to define the scenario within which waste package degradation calculations are to be made and to quantify the rate of arrival and composition of the brine. Appendix contains 4 vugraphs

  10. Sparse Principal Component Analysis in Medical Shape Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus

    2006-01-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA...

  11. Are small-scale field-aligned currents and magneto sheath-like particle precipitation signatures of the same low-altitude cusp?

    DEFF Research Database (Denmark)

    Watermann, J.; Stauning, P.; Luhr, H.

    2009-01-01

    We examined some 75 observations from the low-altitude Earth orbiting DMSP, Orsted and CHAMP satellites which were taken in the region of the nominal cusp. Our objective was to determine whether the actually observed cusp locations as inferred from magnetosheath-like particle precipitation ("particle cusp") and intense small-scale magnetic field variations ("current cusp"), respectively, were identical and were consistent with the statistically expected latitude of the cusp derived from a huge number of charged particle spectrograms ("statistical cusp"). The geocentric coordinates of the satellites were converted into AACGM coordinates, and the geomagnetic latitude of the cusp boundaries (as indicated by precipitating particles and small-scale field-aligned currents) set in relation to the IMF-Bz dependent latitude of the equatorward boundary of the statistical cusp. We find...

  12. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    AlAmmouri, Ahmad

    2016-07-26

    Stochastic geometry (SG) has been widely accepted as a fundamental tool for modeling and analyzing cellular networks. However, the fading models used with SG analysis are mainly confined to the simplistic Rayleigh fading, which is extended to the Nakagami-m fading in some special cases. However, neither the Rayleigh nor the Nakagami-m accounts for dominant specular components (DSCs) which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks with generalized two-ray (GTR) fading channel. The GTR fading explicitly accounts for two DSCs in addition to the diffuse components and offers high flexibility to capture diverse fading channels that appear in realistic outdoor/indoor wireless communication scenarios. It also encompasses the famous Rayleigh and Rician fading as special cases. To this end, the prominent effect of DSCs is highlighted in terms of average spectral efficiency. © 2016 IEEE.
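
    The fading structure described above can be illustrated with a short sketch (parameter values assumed for illustration, not taken from the paper): a generalized two-ray channel is the sum of two dominant specular components with independent uniform phases plus a diffuse Gaussian part, and it reduces to Rician fading when one specular ray vanishes and to Rayleigh fading when both do.

```python
# Minimal sketch (assumed channel parameters): drawing envelope samples from a
# generalized two-ray (GTR) fading channel with two dominant specular components.
import numpy as np

def gtr_fading_samples(n, v1=1.0, v2=0.7, sigma=0.3, rng=None):
    rng = rng or np.random.default_rng(0)
    phi1 = rng.uniform(0.0, 2.0 * np.pi, n)          # phase of first specular ray
    phi2 = rng.uniform(0.0, 2.0 * np.pi, n)          # phase of second specular ray
    diffuse = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    h = v1 * np.exp(1j * phi1) + v2 * np.exp(1j * phi2) + diffuse
    return np.abs(h)                                  # fading envelope

envelope = gtr_fading_samples(10_000)
print("mean power:", np.mean(envelope ** 2))          # ~ v1^2 + v2^2 + 2*sigma^2
```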

  13. Cognitive components underpinning the development of model-based learning.

    Science.gov (United States)

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  14. Models for describing the thermal characteristics of building components

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik

    2008-01-01

    This paper presents an overview of models that can be applied for modelling the thermal characteristics of buildings and building components using data from outdoor testing. For the analysis of these tests, dynamic analysis models and methods are required. However, a wide variety of models and methods exists, and the problem of choosing the most appropriate approach for each particular case is a non-trivial and interdisciplinary task. Knowledge of a large family of these approaches may therefore be very useful for selecting a suitable approach for each particular case. The characteristics of each type of model are highlighted. Some available software tools for each of the methods described will be mentioned. A case study also demonstrating the difference between linear and nonlinear models is considered. The choice of approach depends, for example, ...

  15. Formal Model-Driven Engineering: Generating Data and Behavioural Components

    Directory of Open Access Journals (Sweden)

    Chen-Wei Wang

    2012-12-01

    Full Text Available Model-driven engineering is the automatic production of software artefacts from abstract models of structure and functionality. By targeting a specific class of system, it is possible to automate aspects of the development process, using model transformations and code generators that encode domain knowledge and implementation strategies. Using this approach, questions of correctness for a complex, software system may be answered through analysis of abstract models of lower complexity, under the assumption that the transformations and generators employed are themselves correct. This paper shows how formal techniques can be used to establish the correctness of model transformations used in the generation of software components from precise object models. The source language is based upon existing, formal techniques; the target language is the widely-used SQL notation for database programming. Correctness is established by giving comparable, relational semantics to both languages, and checking that the transformations are semantics-preserving.
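
    The model-to-text step described above can be illustrated with a small sketch (class and attribute names are hypothetical; this is not the paper's transformation rules or source language): a precise object-model class is transformed into a SQL data component.

```python
# Minimal sketch (illustrative only): generating a SQL data component from a tiny object model.
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    sql_type: str
    nullable: bool = False

@dataclass
class ClassModel:
    name: str
    attributes: list

def to_sql(model: ClassModel) -> str:
    """Transform one object-model class into a CREATE TABLE statement."""
    cols = ["  id INTEGER PRIMARY KEY"]
    for a in model.attributes:
        null = "" if a.nullable else " NOT NULL"
        cols.append(f"  {a.name} {a.sql_type}{null}")
    return f"CREATE TABLE {model.name} (\n" + ",\n".join(cols) + "\n);"

account = ClassModel("account", [Attribute("owner", "VARCHAR(80)"),
                                 Attribute("balance", "DECIMAL(12,2)")])
print(to_sql(account))
```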

  16. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.
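
    The core of SAMC, which is what allows the sampler to avoid becoming trapped as described above, can be sketched on a toy problem (a bimodal one-dimensional target with an energy-based partition; all settings below are assumptions for illustration, not the authors' functional-data implementation): adaptive log-weights are learned so that every subregion of the partition is visited.

```python
# Minimal sketch (toy illustration of SAMC, not the paper's model-averaging code).
import numpy as np

rng = np.random.default_rng(0)
U = lambda x: min((x - 2.0) ** 2, (x + 2.0) ** 2 + 0.5)     # bimodal energy function
edges = np.linspace(0.0, 5.0, 11)                           # energy-based partition
region = lambda x: min(np.searchsorted(edges, U(x)), len(edges) - 1)

theta = np.zeros(len(edges))        # adaptive log-weights, one per subregion
pi_desired = np.full(len(edges), 1.0 / len(edges))
x, t0 = -2.0, 1000.0

samples = []
for t in range(1, 50_001):
    y = x + rng.normal(0.0, 1.0)                             # random-walk proposal
    log_accept = (-U(y) + theta[region(x)]) - (-U(x) + theta[region(y)])
    if np.log(rng.uniform()) < log_accept:
        x = y
    gamma = t0 / max(t0, float(t))                           # decreasing gain sequence
    indicator = np.zeros(len(theta)); indicator[region(x)] = 1.0
    theta += gamma * (indicator - pi_desired)                # stochastic approximation step
    samples.append(x)

print("fraction of samples near the x = +2 mode:", np.mean(np.array(samples) > 0))
```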

  17. A minimal model for two-component dark matter

    International Nuclear Information System (INIS)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  18. Evaluation of the RELAP5/MOD3 multidimensional component model

    International Nuclear Information System (INIS)

    Tomlinson, E.T.; Rens, T.E.; Coffield, R.D.

    1994-01-01

    Accurate plenum predictions, which are directly related to the mixing models used, are an important plant modeling consideration because of their consequential impact on basic transient performance calculations for the integrated system. The effect of a plenum is a time shift between inlet and outlet temperature changes for the particular volume. Perfect mixing, where the total volume interacts instantaneously with the total inlet flow, does not occur because of effects such as inlet/outlet nozzle jetting, flow stratification, nested vortices within the volume, and the general three-dimensional velocity distribution of the flow field. The time lag between the inlet and outlet flows affects the predicted rate of temperature change experienced by various plant system components, and this in turn affects local component analyses that depend on the rate of temperature change. This study includes a comparison of two-dimensional plenum mixing predictions using CFD-FLOW3D, RELAP5/MOD3, and perfect mixing models. Three different geometries (flat, square and tall) are assessed for scalar transport times using a wide range of inlet velocity and isothermal conditions. In addition, the three geometries were evaluated for low flow conditions with the inlet flow experiencing a large step temperature decrease. A major conclusion from this study is that the RELAP5/MOD3 multidimensional component model appears to adequately predict plenum mixing for a wide range of thermal-hydraulic conditions representative of plant transients

  19. Evaluating fugacity models for trace components in landfill gas

    Energy Technology Data Exchange (ETDEWEB)

    Shafi, Sophie [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Sweetman, Andrew [Department of Environmental Science, Lancaster University, Lancaster LA1 4YQ (United Kingdom); Hough, Rupert L. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Smith, Richard [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Rosevear, Alan [Science Group - Waste and Remediation, Environment Agency, Reading RG1 8DQ (United Kingdom); Pollard, Simon J.T. [Integrated Waste Management Centre, Sustainable Systems Department, Building 61, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedfordshire MK43 0AL (United Kingdom)]. E-mail: s.pollard@cranfield.ac.uk

    2006-12-15

    A fugacity approach was evaluated to reconcile loadings of vinyl chloride (chloroethene), benzene, 1,3-butadiene and trichloroethylene in waste with concentrations observed in landfill gas monitoring studies. An evaluative environment derived from fictitious but realistic properties such as volume, composition, and temperature, constructed with data from the Brogborough landfill (UK) test cells, was used to test a fugacity approach to generating the source term for use in landfill gas risk assessment models (e.g. GasSim). SOILVE, a dynamic Level II model adapted here for landfills, showed greatest utility for benzene and 1,3-butadiene, modelled under anaerobic conditions over a 10 year simulation. Modelled concentrations of these components (95 300 µg m-3; 43 µg m-3) fell within measured ranges observed in gas from landfills (24 300-180 000 µg m-3; 20-70 µg m-3). This study highlights the need (i) for representative and time-referenced biotransformation data; (ii) to evaluate the partitioning characteristics of organic matter within waste systems and (iii) for a better understanding of the role that gas extraction rate (flux) plays in producing trace component concentrations in landfill gas. - Fugacity for trace component in landfill gas.
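
    The structure of a Level II fugacity calculation of the kind evaluated above can be sketched as follows (compartment properties and the emission rate are illustrative assumptions, not the study's parameterisation): a steady emission is balanced against reaction and advection losses at a single shared fugacity, from which the trace-component concentration in each compartment, including the gas phase, follows.

```python
# Minimal sketch (illustrative values): a Mackay-style Level II fugacity mass balance.
compartments = {
    #            volume [m3]   Z [mol/(m3*Pa)]  reaction k [1/h]  advective flow G [m3/h]
    "gas":      {"V": 1.0e5,   "Z": 4.0e-4,     "k": 1.0e-3,      "G": 50.0},
    "leachate": {"V": 2.0e3,   "Z": 1.0e-2,     "k": 5.0e-3,      "G": 1.0},
    "waste":    {"V": 5.0e4,   "Z": 5.0e-1,     "k": 1.0e-4,      "G": 0.0},
}
emission = 0.5   # mol/h of the trace component entering the system

# D-values (mol/(Pa*h)) for reaction and advection losses, summed over compartments
d_total = sum(c["V"] * c["Z"] * c["k"] + c["G"] * c["Z"] for c in compartments.values())
fugacity = emission / d_total                     # common fugacity at steady state [Pa]

for name, c in compartments.items():
    conc_mol_m3 = c["Z"] * fugacity               # C = Z * f
    print(f"{name}: {conc_mol_m3:.3e} mol/m3")
```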

  20. Traceable components of terrestrial carbon storage capacity in biogeochemical models.

    Science.gov (United States)

    Xia, Jianyang; Luo, Yiqi; Wang, Ying-Ping; Hararuk, Oleksandra

    2013-07-01

    Biogeochemical models have been developed to account for more and more processes, making their complex structures difficult to understand and evaluate. Here, we introduce a framework to decompose a complex land model into traceable components based on mutually independent properties of modeled biogeochemical processes. The framework traces modeled ecosystem carbon storage capacity (Xss) to (i) a product of net primary productivity (NPP) and ecosystem residence time (τE). The latter, τE, can be further traced to (ii) baseline carbon residence times (τ'E), which are usually preset in a model according to vegetation characteristics and soil types, (iii) environmental scalars (ξ), including temperature and water scalars, and (iv) environmental forcings. We applied the framework to the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model to help understand differences in modeled carbon processes among biomes and as influenced by nitrogen processes. With the climate forcings of 1990, modeled evergreen broadleaf forest had the highest NPP among the nine biomes and moderate residence times, leading to a relatively high carbon storage capacity (31.5 kg C m-2). Deciduous needle leaf forest had the longest residence time (163.3 years) and low NPP, leading to moderate carbon storage (18.3 kg C m-2). The longest τE in deciduous needle leaf forest was ascribed to its longest τ'E (43.6 years) and small ξ (0.14 on litter/soil carbon decay rates). Incorporation of nitrogen processes into the CABLE model decreased Xss in all biomes via reduced NPP (e.g., -12.1% in shrub land) or decreased τE or both. The decreases in τE resulted from nitrogen-induced changes in τ'E (e.g., -26.7% in C3 grassland) through carbon allocation among plant pools and transfers from plant to litter and soil pools. Our framework can be used to facilitate data-model comparisons and model intercomparisons via tracking a few traceable components for all terrestrial carbon
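
    The decomposition described above can be sketched numerically (the values below are illustrative assumptions, not CABLE output, and the single environmental scalar is a simplification of the per-pool scalars used in the full framework): storage capacity is the product of NPP and residence time, and residence time lengthens as the environmental scalar slows decay.

```python
# Minimal sketch (illustrative numbers): X_ss = NPP * tau_E, with tau_E traced to a
# baseline residence time and an environmental scalar on decay rates.
def storage_capacity(npp_kgC_m2_yr, baseline_residence_yr, env_scalar):
    residence_yr = baseline_residence_yr / env_scalar     # smaller scalar -> slower decay
    return npp_kgC_m2_yr * residence_yr, residence_yr     # X_ss = NPP * tau_E

x_ss, tau_e = storage_capacity(npp_kgC_m2_yr=0.3,
                               baseline_residence_yr=40.0,
                               env_scalar=0.5)
print(f"tau_E = {tau_e:.1f} yr, X_ss = {x_ss:.1f} kg C m^-2")
```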

  1. Scale modeling flow-induced vibrations of reactor components

    International Nuclear Information System (INIS)

    Mulcahy, T.M.

    1982-06-01

    Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response

  2. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analyses in medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present, as often happens in practice, understanding covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of the hazard rate. Estimation is carried out by an expectation-maximization algorithm maximizing a penalized likelihood. For inferential purposes, we apply the Louis formula to obtain point-wise confidence intervals for the noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.
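
    As a point of reference, the two-component mixture cure structure described above can be written compactly (notation assumed here, not taken from the paper) as

    $$ S_{\mathrm{pop}}(t \mid x) \;=\; 1 - \pi(x) + \pi(x)\,S_u(t \mid x), \qquad S_u(t \mid x) \;=\; \exp\!\left(-\int_0^t h_u(s \mid x)\,\mathrm{d}s\right), $$

    where $\pi(x)$ is the noncured (susceptible) probability and $h_u$ is the hazard rate of the susceptible subpopulation; the identifiability condition mentioned in the abstract corresponds to an additive form $\log h_u(t \mid x) = \alpha(t) + \beta(x)$ with no time-covariate interactions.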

  3. Modelling raster-based monthly water balance components for Europe

    Energy Technology Data Exchange (ETDEWEB)

    Ulmen, C.

    2000-11-01

    The terrestrial runoff component is a comparatively small but sensitive and thus significant quantity in the global energy and water cycle at the interface between landmass and atmosphere. As opposed to soil moisture and evapotranspiration, which critically determine water vapour fluxes and thus water and energy transport, it can be measured as an integrated quantity over a large area, i.e. the river basin. This peculiarity makes terrestrial runoff ideally suited for the calibration, verification and validation of general circulation models (GCMs). Gauging stations are not homogeneously distributed in space. Moreover, time series are not necessarily continuously measured, nor do they in general have overlapping time periods. To overcome these problems with regard to the regular grid spacing used in GCMs, different methods can be applied to transform irregular data to regular, so-called gridded runoff fields. The present work aims to directly compute the gridded components of the monthly water balance (including gridded runoff fields) for Europe by applying the well-established raster-based macro-scale water balance model WABIMON used at the Federal Institute of Hydrology, Germany. Model calibration and validation is performed by separate examination of 29 representative European catchments. Results indicate a general applicability of the model, delivering reliable overall patterns and integrated quantities on a monthly basis. For time steps of less than two weeks, further research and structural improvements of the model are suggested. (orig.)
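
    For one raster cell, the monthly balance components can be sketched as a simple bucket scheme (a generic illustration with assumed capacities, not WABIMON itself): precipitation fills a soil-moisture store, evapotranspiration is limited by the water available, and the surplus above the storage capacity becomes runoff.

```python
# Minimal sketch (generic bucket model, not WABIMON): monthly water balance for one cell.
def monthly_water_balance(precip_mm, pet_mm, storage_mm, capacity_mm=150.0):
    """Return (actual_et, runoff, new_storage) for one cell and one month [mm]."""
    water = storage_mm + precip_mm
    actual_et = min(pet_mm, water)                 # ET limited by available water
    water -= actual_et
    runoff = max(0.0, water - capacity_mm)         # surplus above storage capacity
    new_storage = water - runoff
    return actual_et, runoff, new_storage

storage = 75.0
for precip, pet in [(90.0, 40.0), (30.0, 80.0), (120.0, 60.0)]:   # three example months
    aet, q, storage = monthly_water_balance(precip, pet, storage)
```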

  4. No Change in Running Mechanics With Live High-Train Low Altitude Training in Elite Distance Runners.

    Science.gov (United States)

    Stickford, Abigail S L; Wilhite, Daniel P; Chapman, Robert F

    2017-01-01

    Investigations into ventilatory, metabolic, and hematological changes with altitude training have been completed; however, there is a lack of research exploring potential gait-kinematic changes after altitude training, despite a common complaint of athletes being a lack of leg "turnover" on return from altitude training. To determine if select kinematic variables changed in a group of elite distance runners after 4 wk of altitude training. Six elite male distance runners completed a 28-d altitude-training intervention in Flagstaff, AZ (2150 m), following a modified "live high-train low" model, wherein higher-intensity runs were performed at lower altitudes (945-1150 m) and low-intensity sessions were completed at higher altitudes (1950-2850 m). Gait parameters were measured 2-9 d before departure to altitude and 1 to 2 d after returning to sea level at running speeds of 300-360 m/min. No differences were found in ground-contact time, swing time, or stride length or frequency after altitude training (P > .05). Running mechanics are not affected by chronic altitude training in elite distance runners. The data suggest that either chronic training at altitude truly has no effect on running mechanics or completing the live high-train low model of altitude training, where higher-velocity workouts are completed at lower elevations, mitigates any negative mechanical adaptations that may be associated with chronic training at slower speeds.

  5. Three-Component Forward Modeling for Transient Electromagnetic Method

    Directory of Open Access Journals (Sweden)

    Bin Xiong

    2010-01-01

    Full Text Available In general, only the time derivative of the vertical magnetic field is considered in the data interpretation of the transient electromagnetic (TEM) method. However, for surveys in complex geological structures, this conventional technique has gradually become unable to satisfy the demands of field exploration. To improve the precision of integrated TEM interpretation, it is necessary to study three-component forward modeling and inversion. In this paper, a three-component forward algorithm for 2.5D TEM based on the independent electric and magnetic fields has been developed. The main advantage of the new scheme is that it reduces the size of the global system matrix to the utmost extent; that is, the present matrix is only one fourth the size of that in the conventional algorithm. In order to illustrate the feasibility and usefulness of the present algorithm, several typical geoelectric models of the TEM responses produced by loop sources at the air-earth interface are presented. The results of the numerical experiments show that the computation speed of the present scheme is considerably increased and that three-component interpretation can get the most out of the collected data, from which the spatial characteristics of the anomalous body can be analyzed and interpreted more comprehensively.

  6. Integrated modelling of the edge plasma and plasma facing components

    International Nuclear Information System (INIS)

    Coster, D.P.; Bonnin, X.; Mutzke, A.; Schneider, R.; Warrier, M.

    2007-01-01

    Modelling of the interaction between the edge plasma and plasma facing components (PFCs) has tended to place more emphasis on either the plasma or the PFCs. Either the PFCs do not change with time and the plasma evolution is studied, or the plasma is assumed to remain static and the detailed interaction of the plasma and the PFCs are examined, with no back-reaction on the plasma taken into consideration. Recent changes to the edge simulation code, SOLPS, now allow for changes in both the plasma and the PFCs to be considered. This has been done by augmenting the code to track the time-development of the properties of plasma facing components (PFCs). Results of standard mixed-materials scenarios (base and redeposited C; Be) are presented

  7. Two-component scattering model and the electron density spectrum

    Science.gov (United States)

    Zhou, A. Z.; Tan, J. Y.; Esamdin, A.; Wu, X. J.

    2010-02-01

    In this paper, we discuss a rigorous treatment of the refractive scintillation caused by a two-component interstellar scattering medium and a Kolmogorov form of density spectrum. It is assumed that the interstellar scattering medium is composed of a thin-screen interstellar medium (ISM) and an extended interstellar medium. We consider the case that the scattering of the thin screen concentrates in a thin layer represented by a δ function distribution and that the scattering density of the extended irregular medium satisfies the Gaussian distribution. We investigate and develop equations for the flux density structure function corresponding to this two-component ISM geometry in the scattering density distribution and compare our result with the observations. We conclude that the refractive scintillation caused by this two-component ISM scattering gives a more satisfactory explanation for the observed flux density variation than does the single extended medium model. The level of refractive scintillation is strongly sensitive to the distribution of scattering material along the line of sight (LOS). The theoretical modulation indices are comparatively less sensitive to the scattering strength of the thin-screen medium, but they critically depend on the distance from the observer to the thin screen. The logarithmic slope of the structure function is sensitive to the scattering strength of the thin-screen medium, but is relatively insensitive to the thin-screen location. Therefore, the proposed model can be applied to interpret the structure functions of flux density observed in pulsar PSR B2111 + 46 and PSR B0136 + 57. The result suggests that the medium consists of a discontinuous distribution of plasma turbulence embedded in the interstellar medium. Thus our work provides some insight into the distribution of the scattering along the LOS to the pulsar PSR B2111 + 46 and PSR B0136 + 57.

  8. A multi-component evaporation model for beam melting processes

    Science.gov (United States)

    Klassen, Alexander; Forster, Vera E.; Körner, Carolin

    2017-02-01

    In additive manufacturing using laser or electron beam melting technologies, evaporation losses and changes in chemical composition are known issues when processing alloys with volatile elements. In this paper, a recently described numerical model based on a two-dimensional free surface lattice Boltzmann method is further developed to incorporate the effects of multi-component evaporation. The model takes into account the local melt pool composition during heating and fusion of metal powder. For validation, the titanium alloy Ti-6Al-4V is melted by selective electron beam melting and analysed using mass loss measurements and high-resolution microprobe imaging. Numerically determined evaporation losses and spatial distributions of aluminium compare well with experimental data. Predictions of the melt pool formation in bulk samples provide insight into the competition between the loss of volatile alloying elements from the irradiated surface and their advective redistribution within the molten region.

  9. Flexible Multibody Systems Models Using Composite Materials Components

    International Nuclear Information System (INIS)

    Neto, Maria Augusta; Ambr'osio, Jorge A. C.; Leal, Rog'erio Pereira

    2004-01-01

    The use of a multibody methodology to describe the large motion of complex systems that experience structural deformations makes it possible to represent the complete system motion, the relative kinematics between the components involved, the deformation of the structural members, and the inertia coupling between the large rigid body motion and the system elastodynamics. In this work, the flexible multibody dynamics formulations of complex models are extended to include elastic components made of composite materials, which may be laminated and anisotropic. The deformation of any structural member must be elastic and linear when described in a coordinate frame fixed to one or more material points of its domain, regardless of the complexity of its geometry. To achieve the proposed flexible multibody formulation, a finite element model for each flexible body is used. For the composite material beam elements, the section properties are found using an asymptotic procedure that involves a two-dimensional finite element analysis of their cross-section. The equations of motion of the flexible multibody system are solved using an augmented Lagrangian formulation, and the accelerations and velocities are integrated in time using a multi-step, multi-order integration algorithm based on the Gear method

  10. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
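    As a rough illustration of the difference between simple thresholding of PCA loadings and sparse PCA (a sketch with made-up data, not the authors' Matlab implementation), using scikit-learn:

```python
# Minimal sketch: ordinary PCA loadings, thresholded loadings, and sparse PCA
# loadings for a matrix X of aligned shapes (one row per shape, columns are
# concatenated landmark coordinates).  Data and parameters are illustrative.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))                 # 60 shapes, 40 landmark coordinates
X -= X.mean(axis=0)                           # centre the data

pca = PCA(n_components=5).fit(X)
dense_loadings = pca.components_              # every coordinate contributes

# Baseline compared against in the paper: set small loadings to zero
thresholded = np.where(np.abs(dense_loadings) < 0.1, 0.0, dense_loadings)
print("non-zero loadings per thresholded mode:", (np.abs(thresholded) > 0).sum(axis=1))

# Sparse PCA: each mode is a combination of only a subset of coordinates
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(X)
sparse_loadings = spca.components_
print("non-zero loadings per sparse mode:",
      (np.abs(sparse_loadings) > 1e-12).sum(axis=1))
```

    Ordering the sparse modes by the variance of X projected onto each loading vector then gives the decreasing-variance ordering discussed above.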

  11. Modeling for thermodynamic activities of components in simulated reprocessing solutions

    International Nuclear Information System (INIS)

    Sasahira, Akira; Hoshikawa, Tadahiro; Kawamura, Fumio

    1992-01-01

    Analyses of chemical reactions have been widely carried out for soluble fission products encountered in nuclear fuel reprocessing. For detailed analyses of reactions, a prediction of the activity or activity coefficient for nitric acid, water, and several nitrates of fission products is needed. An approach for predicting the nitric acid activity was presented earlier. The model, designated the hydration model, does not predict the nitrate activity. It did, however, suggest that the activity of water would be a function of nitric acid activity but not the molar fraction of water. If the activities of nitric acid and water are accurately predicted, the activity of the last component, nitrate, can be calculated using the Gibbs-Duhem relation for chemical potentials. Therefore, in this study, the earlier hydration model was modified to evaluate the water activity more accurately. The modified model was experimentally examined in simulated reprocessing solutions. It is concluded that the modified model was suitable for water activity, but further improvement was needed for the activity evaluation of nitric acid in order to calculate the nitrate activity
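    The Gibbs-Duhem step mentioned above can be written out explicitly; for a ternary nitric acid-water-nitrate solution at constant temperature and pressure,

    \[
    x_{\mathrm{HNO_3}} \, d\ln a_{\mathrm{HNO_3}}
    + x_{\mathrm{H_2O}} \, d\ln a_{\mathrm{H_2O}}
    + x_{\mathrm{salt}} \, d\ln a_{\mathrm{salt}} = 0 ,
    \]

    so that once the hydration model supplies the nitric acid and water activities, the nitrate (salt) activity follows by integrating d ln a_salt = -(x_HNO3 d ln a_HNO3 + x_H2O d ln a_H2O) / x_salt along the composition path.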

  12. Low Altitude Emission (LAE) of Energetic Neutral Atoms (ENA) Observed by TWINS and its Relation to the CINEMA CubeSat Mission

    Science.gov (United States)

    Bazell, D.; Sotirelis, T.; Nair, H.; Roelof, E. C.; Brandt, P. C.

    2009-12-01

    The brightest source of energetic neutral atoms (ENAs) at energies >1 keV is low altitude emission (LAE) from ~200-400 km near auroral latitudes where precipitating energetic ions undergo multiple atomic collisions with the monatomic (O) exosphere. This emission is many times brighter than that from the high-altitude ring current region where the energetic ions interact only weakly with the much less dense monatomic (H) hydrogen geocorona. The recently selected NSF CubeSat mission CINEMA [Lin et al., this special session] has, as part of its science payload (STEIN), an ENA imager covering energies 4-100 keV. From a high-inclination ~800 km orbit, STEIN will view the LAE four times during every 90 minutes. The NASA TWINS stereo ENA imagers (2-40 keV) will also view the LAE from their Molniya orbits (apogee radius ~7 Re). We have been analyzing the TWINS ENA images of LAE and comparing them with in situ ion measurements (1-40 keV) from DMSP spacecraft when their tracks take them under the ion precipitation regions imaged by TWINS. We have developed an ENA emissivity function that relates the directionally-dependent emergent ENA spectrum to that of the precipitating ions. The TWINS/DMSP direct comparisons show good agreement. We offer suggestions on joint observing strategies for CINEMA, TWINS and DMSP after the CINEMA launch in the second half of 2011.

  13. Understanding science teacher enhancement programs: Essential components and a model

    Science.gov (United States)

    Spiegel, Samuel Albert

    Researchers and practitioners alike recognize that "the national goal that every child in the United States has access to high-quality school education in science and mathematics cannot be realized without the availability of effective professional development of teachers" (Hewson, 1997, p. 16). Further, there is a plethora of reports calling for the improvement of professional development efforts (Guskey & Huberman, 1995; Kyle, 1995; Loucks-Horsley, Hewson, Love, & Stiles, 1997). In this study I analyze a successful 3-year teacher enhancement program, one form of professional development, to: (1) identify essential components of an effective teacher enhancement program; and (2) create a model to identify and articulate the critical issues in designing, implementing, and evaluating teacher enhancement programs. Five primary sources of information were converted into data: (1) exit questionnaires, (2) exit surveys, (3) exit interview transcripts, (4) focus group transcripts, and (5) other artifacts. Additionally, a focus group was used to conduct member checks. Data were analyzed in an iterative process which led to the development of the list of essential components. The Components are categorized by three organizers: Structure (e.g., science research experience, a mediator throughout the program), Context (e.g., intensity, collaboration), and Participant Interpretation (e.g., perceived to be "safe" to examine personal beliefs and practices, actively engaged). The model is based on: (1) a 4-year study of a successful teacher enhancement program; (2) an analysis of professional development efforts reported in the literature; and (3) reflective discussions with implementors, evaluators, and participants of professional development programs. The model consists of three perspectives, cognitive, symbolic interaction, and organizational, representing different viewpoints from which to consider issues relevant to the success of a teacher enhancement program. These

  14. Modeling Organic Contaminant Desorption from Municipal Solid Waste Components

    Science.gov (United States)

    Knappe, D. R.; Wu, B.; Barlaz, M. A.

    2002-12-01

    Approximately 25% of the sites on the National Priority List (NPL) of Superfund are municipal landfills that accepted hazardous waste. Unlined landfills typically result in groundwater contamination, and priority pollutants such as alkylbenzenes are often present. To select cost-effective risk management alternatives, better information on factors controlling the fate of hydrophobic organic contaminants (HOCs) in landfills is required. The objectives of this study were (1) to investigate the effects of HOC aging time, anaerobic sorbent decomposition, and leachate composition on HOC desorption rates, and (2) to simulate HOC desorption rates from polymers and biopolymer composites with suitable diffusion models. Experiments were conducted with individual components of municipal solid waste (MSW) including polyvinyl chloride (PVC), high-density polyethylene (HDPE), newsprint, office paper, and model food and yard waste (rabbit food). Each of the biopolymer composites (office paper, newsprint, rabbit food) was tested in both fresh and anaerobically decomposed form. To determine the effects of aging on alkylbenzene desorption rates, batch desorption tests were performed after sorbents were exposed to toluene for 30 and 250 days in flame-sealed ampules. Desorption tests showed that alkylbenzene desorption rates varied greatly among MSW components (PVC slowest, fresh rabbit food and newsprint fastest). Furthermore, desorption rates decreased as aging time increased. A single-parameter polymer diffusion model successfully described PVC and HDPE desorption data, but it failed to simulate desorption rate data for biopolymer composites. For biopolymer composites, a three-parameter biphasic polymer diffusion model was employed, which successfully simulated both the initial rapid and the subsequent slow desorption of toluene. Toluene desorption rates from MSW mixtures were predicted for typical MSW compositions in the years 1960 and 1997. For the older MSW mixture, which had a

  15. Modeling photoionization of aqueous DNA and its components.

    Science.gov (United States)

    Pluhařová, Eva; Slavíček, Petr; Jungwirth, Pavel

    2015-05-19

    Radiation damage to DNA is usually considered in terms of UVA and UVB radiation. These ultraviolet rays, which are part of the solar spectrum, can indeed cause chemical lesions in DNA, triggered by photoexcitation particularly in the UVB range. Damage can, however, also be caused by higher energy radiation, which can directly ionize the DNA or its immediate surroundings, leading to indirect damage. Thanks to absorption in the atmosphere, the intensity of such ionizing radiation is negligible in the solar spectrum at the surface of Earth. Nevertheless, such an ionizing scenario can become dangerously plausible for astronauts or flight personnel, as well as for persons present at nuclear power plant accidents. On the beneficial side, ionizing radiation is employed as means for destroying the DNA of cancer cells during radiation therapy. Quantitative information about ionization of DNA and its components is important not only for DNA radiation damage, but also for understanding redox properties of DNA in redox sensing or labeling, as well as charge migration along the double helix in nanoelectronics applications. Until recently, the vast majority of experimental and computational data on DNA ionization was pertinent to its components in the gas phase, which is far from its native aqueous environment. The situation has, however, changed for the better due to the advent of photoelectron spectroscopy in liquid microjets and its most recent application to photoionization of aqueous nucleosides, nucleotides, and larger DNA fragments. Here, we present a consistent and efficient computational methodology, which allows one to accurately evaluate ionization energies and model photoelectron spectra of aqueous DNA and its individual components. After careful benchmarking, the method based on density functional theory and its time-dependent variant with properly chosen hybrid functionals and polarizable continuum solvent model provides ionization energies with an accuracy of 0.2-0.3 eV

  16. On combined gravity gradient components modelling for applied geophysics

    International Nuclear Information System (INIS)

    Veryaskin, Alexey; McRae, Wayne

    2008-01-01

    Gravity gradiometry research and development has intensified in recent years to the extent that technologies providing a resolution of about 1 eotvos per 1 second average shall likely soon be available for multiple critical applications such as natural resources exploration, oil reservoir monitoring and defence establishment. Much of the content of this paper was composed a decade ago, and only minor modifications were required for the conclusions to be just as applicable today. In this paper we demonstrate how gravity gradient data can be modelled, and show some examples of how gravity gradient data can be combined in order to extract valuable information. In particular, this study demonstrates the importance of two gravity gradient components, Txz and Tyz, which, when processed together, can provide more information on subsurface density contrasts than that derived solely from the vertical gravity gradient (Tzz)

  17. Modelling safety of multistate systems with ageing components

    Energy Technology Data Exchange (ETDEWEB)

    Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna [Gdynia Maritime University, Department of Mathematics ul. Morska 81-87, Gdynia 81-225 Poland (Poland)

    2016-06-08

    An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of the ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multi-state system risk function and the moment of exceeding by the system the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.

  18. Modelling safety of multistate systems with ageing components

    International Nuclear Information System (INIS)

    Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna

    2016-01-01

    An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of the ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multi-state system risk function and the moment of exceeding by the system the critical safety state are introduced. Applications of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system are presented as well.

  19. Component vibration of VVER-reactors - diagnostics and modelling

    International Nuclear Information System (INIS)

    Altstadt, E.; Scheffler, M.; Weiss, F.-P.

    1995-01-01

    Flow induced vibrations of reactor pressure vessel (RPV) internals (control element and core barrel motions) at VVER-440 reactors have led to the development of dedicated methods for on-line monitoring. These methods need a certain developed stage of the faults to be detected. To achieve a real sensitive early detection of mechanical faults of RPV internals, a theoretical vibration model was developed based on finite elements. The model comprises the whole primary circuit including the steam generators (SG). By means of that model all eigenfrequencies up to 30 Hz and the corresponding mode shapes were calculated for the normal vibration behaviour. Moreover the shift of eigenfrequencies and of amplitudes due to the degradation or to the failure of internal clamping and spring elements could be investigated, showing that a recognition of such degradations even inside the RPV is possible by pure excore vibration measurements. A true diagnostic, that is the identification of the failed component, might become possible because different faults influence different and well separated eigenfrequencies. (author)
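    For an undamped finite-element model, the eigenfrequency calculation described above reduces to the generalized eigenvalue problem K x = λ M x with λ = (2πf)²; a minimal sketch with small stand-in matrices (not the VVER-440 primary circuit model):

```python
# Minimal sketch: eigenfrequencies of an undamped FE model, K x = lambda M x.
# K and M are toy stand-ins for the assembled stiffness and mass matrices.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 4.0e6, -2.0e6,  0.0   ],
              [-2.0e6,  4.0e6, -2.0e6 ],
              [ 0.0,   -2.0e6,  2.0e6 ]])     # N/m
M = np.diag([1.0e3, 1.0e3, 1.0e3])            # kg

eigvals, mode_shapes = eigh(K, M)             # generalized symmetric eigenproblem
freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)

# Keep only the band relevant for ex-core vibration monitoring
print("eigenfrequencies up to 30 Hz:", freqs_hz[freqs_hz <= 30.0])
```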

  20. BWR Refill-Reflood Program, Task 4.7 - model development: TRAC-BWR component models

    International Nuclear Information System (INIS)

    Cheung, Y.K.; Parameswaran, V.; Shaug, J.C.

    1983-09-01

    TRAC (Transient Reactor Analysis Code) is a computer code for best-estimate analysis for the thermal hydraulic conditions in a reactor system. The development and assessment of the BWR component models developed under the Refill/Reflood Program that are necessary to structure a BWR-version of TRAC are described in this report. These component models are the jet pump, steam separator, steam dryer, two-phase level tracking model, and upper-plenum mixing model. These models have been implemented into TRAC-B02. Also a single-channel option has been developed for individual fuel-channel analysis following a system-response calculation

  1. Stochastic Models of Defects in Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2013-01-01

    The drivetrain in a wind turbine nacelle typically consists of a variety of heavily loaded components, like the main shaft, bearings, gearbox and generator. The variations in environmental load challenge the performance of all the components of the drivetrain. Failure of each of these components...

  2. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs

    Science.gov (United States)

    Yahyanejad, Saeed; Rinner, Bernhard

    2015-06-01

    The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectrums, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in case of multiple image pairs.
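    As a point of reference, the standard feature-based pipeline that the proposed interspectral descriptor is compared against (SIFT matching with a ratio test followed by a RANSAC homography) looks roughly as follows in OpenCV; file names are placeholders and the authors' descriptor itself is not reproduced:

```python
# Minimal sketch of feature-based thermal-to-visual registration with OpenCV.
import cv2
import numpy as np

visual = cv2.imread("visual.png", cv2.IMREAD_GRAYSCALE)     # placeholder file names
thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_v, des_v = sift.detectAndCompute(visual, None)
kp_t, des_t = sift.detectAndCompute(thermal, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_v, k=2)
good = [m for m, n in (p for p in matches if len(p) == 2)
        if m.distance < 0.75 * n.distance]                   # Lowe ratio test

src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_v[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust homography

registered = cv2.warpPerspective(thermal, H, visual.shape[::-1])
```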

  3. Exploring a minimal two-component p53 model

    International Nuclear Information System (INIS)

    Sun, Tingzhe; Zhu, Feng; Shen, Pingping; Yuan, Ruoshi; Xu, Wei

    2010-01-01

    The tumor suppressor p53 coordinates many attributes of cellular processes via interlocked feedback loops. To understand the biological implications of feedback loops in a p53 system, a two-component model which encompasses essential feedback loops was constructed and further explored. Diverse bifurcation properties, such as bistability and oscillation, emerge by manipulating the feedback strength. The p53-mediated MDM2 induction dictates the bifurcation patterns. We first identified irradiation dichotomy in p53 models and further proposed that bistability and oscillation can behave in a coordinated manner. Further sensitivity analysis revealed that p53 basal production and MDM2-mediated p53 degradation, which are central to cellular control, are most sensitive processes. Also, we identified that the much more significant variations in amplitude of p53 pulses observed in experiments can be derived from overall amplitude parameter sensitivity. The combined approach with bifurcation analysis, stochastic simulation and sampling-based sensitivity analysis not only gives crucial insights into the dynamics of the p53 system, but also creates a fertile ground for understanding the regulatory patterns of other biological networks
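    The two-component structure (basal p53 production, MDM2-mediated p53 degradation, p53-mediated MDM2 induction) is often written as a pair of ODEs; the sketch below is a generic form of such a negative-feedback model with illustrative parameter values, not the exact equations of the paper, and depending on the feedback strength it settles to a steady state or oscillates:

```python
# Minimal sketch of a generic two-component p53-MDM2 negative-feedback model.
# Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def p53_mdm2(t, y, k_p=1.0, k_deg=2.0, k_m=1.2, d_m=1.0, K=0.3, n=4):
    p, m = y
    dp = k_p - k_deg * m * p / (K + p)            # basal production, MDM2-mediated degradation
    dm = k_m * p**n / (K**n + p**n) - d_m * m     # p53-induced MDM2, first-order decay
    return [dp, dm]

sol = solve_ivp(p53_mdm2, (0.0, 50.0), [0.1, 0.1], dense_output=True)
t = np.linspace(0.0, 50.0, 500)
p53, mdm2 = sol.sol(t)
print("final p53 and MDM2 levels:", p53[-1], mdm2[-1])
```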

  4. Modeling and validation of existing VAV system components

    Energy Technology Data Exchange (ETDEWEB)

    Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada)

    2004-07-01

    The optimization of supervisory control strategies and local-loop controllers can improve the performance of HVAC (heating, ventilating, air-conditioning) systems. In this study, the component model of the fan, the damper and the cooling coil were developed and validated against monitored data of an existing variable air volume (VAV) system installed at Montreal's Ecole de Technologie Superieure. The measured variables that influence energy use in individual HVAC models included: (1) outdoor and return air temperature and relative humidity, (2) supply air and water temperatures, (3) zone airflow rates, (4) supply duct, outlet fan, mixing plenum static pressures, (5) fan speed, and (6) minimum and principal damper and cooling and heating coil valve positions. The additional variables that were considered, but not measured were: (1) fan and outdoor airflow rate, (2) inlet and outlet cooling coil relative humidity, and (3) liquid flow rate through the heating or cooling coils. The paper demonstrates the challenges of the validation process when monitored data of existing VAV systems are used. 7 refs., 11 figs.

  5. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach for parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior proved to be the most appropriate uncertainty analysis; the posterior can be approximated with a suitable probability distribution, in this paper the lognormal distribution. (author)
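    A minimal sketch of the kind of Bayesian estimation described above for a time-related component (Poisson likelihood for the failure count over the observed operating time, numerical posterior, lognormal approximation); the prior and the observed counts below are illustrative assumptions, not values from the Krsko PSA:

```python
# Minimal sketch: numerical posterior for a component failure rate lambda,
# Poisson likelihood with a lognormal prior, then a lognormal approximation.
import numpy as np

failures, hours = 2, 5.0e4                       # illustrative observations
lam = np.linspace(1e-7, 5e-4, 20000)             # failure-rate grid [1/h]
dlam = lam[1] - lam[0]

prior = np.exp(-0.5 * ((np.log(lam) - np.log(1e-5)) / 1.5) ** 2) / lam
likelihood = (lam * hours) ** failures * np.exp(-lam * hours)
posterior = prior * likelihood
posterior /= posterior.sum() * dlam              # normalise to a density

post_mean = np.sum(lam * posterior) * dlam
mu = np.sum(np.log(lam) * posterior) * dlam      # moments of ln(lambda)
sigma = np.sqrt(np.sum((np.log(lam) - mu) ** 2 * posterior) * dlam)
print(f"posterior mean rate: {post_mean:.2e} 1/h")
print(f"lognormal approximation of the posterior: mu={mu:.2f}, sigma={sigma:.2f}")
```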

  6. Connected Component Model for Multi-Object Tracking.

    Science.gov (United States)

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
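    The equivalence-partitioning idea can be illustrated with a small union-find pass: observations linked by a "could lie on the same trajectory" relation end up in one connected component, and each component then forms an independent association subproblem. A toy sketch, not the authors' CCM implementation:

```python
# Minimal sketch: partition observations into independent association
# subproblems via connected components over pairwise compatibility links.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]      # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

# Observations indexed 0..5; an edge means the pair could belong to one object
# (e.g. the detections are close enough in space and time).
edges = [(0, 1), (1, 2), (3, 4)]
parent = list(range(6))
for a, b in edges:
    union(parent, a, b)

components = {}
for obs in range(6):
    components.setdefault(find(parent, obs), []).append(obs)
print(list(components.values()))           # [[0, 1, 2], [3, 4], [5]]
```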

  7. Two-component network model in voice identification technologies

    Directory of Open Access Journals (Sweden)

    Edita K. Kuular

    2018-03-01

    Full Text Available Among the most important parameters of biometric systems with voice modalities that determine their effectiveness, along with reliability and noise immunity, a speed of identification and verification of a person has been accentuated. This parameter is especially sensitive while processing large-scale voice databases in real time regime. Many research studies in this area are aimed at developing new and improving existing algorithms for presentation and processing voice records to ensure high performance of voice biometric systems. Here, it seems promising to apply a modern approach, which is based on complex network platform for solving complex massive problems with a large number of elements and taking into account their interrelationships. Thus, there are known some works which while solving problems of analysis and recognition of faces from photographs, transform images into complex networks for their subsequent processing by standard techniques. One of the first applications of complex networks to sound series (musical and speech analysis are description of frequency characteristics by constructing network models - converting the series into networks. On the network ontology platform a previously proposed technique of audio information representation aimed on its automatic analysis and speaker recognition has been developed. This implies converting information into the form of associative semantic (cognitive network structure with amplitude and frequency components both. Two speaker exemplars have been recorded and transformed into pertinent networks with consequent comparison of their topological metrics. The set of topological metrics for each of network models (amplitude and frequency one is a vector, and together  those combine a matrix, as a digital "network" voiceprint. The proposed network approach, with its sensitivity to personal conditions-physiological, psychological, emotional, might be useful not only for person identification

  8. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of model output. Therefore, the model validation of conditional expectations quantifies the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, a recalculation of the model validation metric of model output is carried out with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in case of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • The validation and calibration process is applied at a single site and at multiple sites. • The validation and calibration process shows superiority over existing methods
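    The area metric used above is, in essence, the area between the empirical CDF of the observations and that of the corresponding model output (or of a conditional expectation of it); a minimal sketch on synthetic samples with illustrative numbers:

```python
# Minimal sketch of the area validation metric: area between two empirical CDFs.
import numpy as np

def area_metric(model_samples, observed_samples):
    grid = np.sort(np.concatenate([model_samples, observed_samples]))
    cdf_m = np.searchsorted(np.sort(model_samples), grid, side="right") / len(model_samples)
    cdf_o = np.searchsorted(np.sort(observed_samples), grid, side="right") / len(observed_samples)
    return np.sum(np.abs(cdf_m - cdf_o)[:-1] * np.diff(grid))

rng = np.random.default_rng(1)
model_out = rng.normal(0.0, 1.0, 500)   # e.g. samples of a conditional expectation of model output
observed = rng.normal(0.3, 1.2, 80)     # physical observations at a validation site
print("area metric before calibration:", area_metric(model_out, observed))
```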

  9. Pilot study on the effects of a 2-week hiking vacation at moderate versus low altitude on plasma parameters of carbohydrate and lipid metabolism in patients with metabolic syndrome.

    Science.gov (United States)

    Gutwenger, Ivana; Hofer, Georg; Gutwenger, Anna K; Sandri, Marco; Wiedermann, Christian J

    2015-03-28

    Hypoxic and hypobaric conditions may augment the beneficial influence of training on cardiovascular risk factors. This pilot study aimed to explore for effects of a two-week hiking vacation at moderate versus low altitude on adipokines and parameters of carbohydrate and lipid metabolism in patients with metabolic syndrome. Fourteen subjects (mean age: 55.8 years, range: 39 - 69) with metabolic syndrome participated in a 2-week structured training program (3 hours of guided daily hiking 4 times a week, training intensity at 55-65% of individual maximal heart rate; total training time, 24 hours). Participants were divided for residence and training into two groups, one at moderate altitude (1,900 m; n = 8), and the other at low altitude (300 m; n = 6). Anthropometric, cardiovascular and metabolic parameters were measured before and after the training period. In study participants, training overall reduced circulating levels of total cholesterol (p = 0.024), low-density lipoprotein cholesterol (p = 0.025) and adiponectin (p triglycerides (p = 0.025) and leptin (p = 0.015), whereas in the low altitude group (n = 6), none of the lipid parameters was significantly changed (each p > 0.05). Hiking-induced relative changes of triglyceride levels were positively associated with reductions in leptin levels (p = 0.006). As compared to 300 m altitude, training at 1,900 m showed borderline significant differences in the pre-post mean reduction rates of triglyceride (p = 0.050) and leptin levels (p = 0.093). Preliminary data on patients with metabolic syndrome suggest that a 2-week hiking vacation at moderate altitude may be more beneficial for adipokines and parameters of lipid metabolism than training at low altitude. In order to draw firm conclusions regarding better corrections of dyslipidemia and metabolic syndrome by physical exercise under mild hypobaric and hypoxic conditions, a sufficiently powered randomized clinical trial appears warranted. ClinicalTrials.gov ID NCT

  10. An ontology for component-based models of water resource systems

    Science.gov (United States)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.

  11. A probabilistic model for component-based shape synthesis

    KAUST Repository

    Kalogerakis, Evangelos; Chaudhuri, Siddhartha; Koller, Daphne; Koltun, Vladlen

    2012-01-01

    represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation

  12. Measurement and Modelling of MIC Components Using Conductive Lithographic Films

    OpenAIRE

    Shepherd, P. R.; Taylor, C.; Evans l, P. S. A.; Harrison, D. J.

    2001-01-01

    Conductive Lithographic Films (CLFs) have previously demonstrated useful properties in printed microwave circuits, combining low cost with high speed of manufacture. In this paper we examine the formation of various passive components via the CLF process, which enables further integration of printed microwave integrated circuits. The printed components include vias, resistors and overlay capacitors, and offer viable alternatives to traditional manufacturing processes for Microwave Integrate...

  13. Design and Application of an Ontology for Component-Based Modeling of Water Systems

    Science.gov (United States)

    Elag, M.; Goodall, J. L.

    2012-12-01

    Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community named the Water Resources Component (WRC) ontology in order to advance the application of component-based modeling frameworks across water related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.

  14. Ecological, psychological, and cognitive components of reading difficulties: testing the component model of reading in fourth graders across 38 countries.

    Science.gov (United States)

    Chiu, Ming Ming; McBride-Chang, Catherine; Lin, Dan

    2012-01-01

    The authors tested the component model of reading (CMR) among 186,725 fourth grade students from 38 countries (45 regions) on five continents by analyzing the 2006 Progress in International Reading Literacy Study data using measures of ecological (country, family, school, teacher), psychological, and cognitive components. More than 91% of the differences in student difficulty occurred at the country (61%) and classroom (30%) levels (ecological), with less than 9% at the student level (cognitive and psychological). All three components were negatively associated with reading difficulties: cognitive (student's early literacy skills), ecological (family characteristics [socioeconomic status, number of books at home, and attitudes about reading], school characteristics [school climate and resources]), and psychological (students' attitudes about reading, reading self-concept, and being a girl). These results extend the CMR by demonstrating the importance of multiple levels of factors for reading deficits across diverse cultures.

  15. Building Component Library: An Online Repository to Facilitate Building Energy Model Creation; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Fleming, K.; Long, N.; Swindler, A.

    2012-05-01

    This paper describes the Building Component Library (BCL), the U.S. Department of Energy's (DOE) online repository of building components that can be directly used to create energy models. This comprehensive, searchable library consists of components and measures as well as the metadata which describes them. The library is also designed to allow contributors to easily add new components, providing a continuously growing, standardized list of components for users to draw upon.

  16. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S

    2015-02-25

    The previously suggested quasi-discrete model for heating and evaporation of complex multi-component hydrocarbon fuel droplets is described. The dependence of density, viscosity, heat capacity and thermal conductivity of liquid components on carbon numbers n and temperatures is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model. This model is applied to the analysis of Diesel and gasoline fuel droplet heating and evaporation. The components with relatively close n are replaced by quasi-components with properties calculated as average properties of the a priori defined groups of actual components. Thus the analysis of the heating and evaporation of droplets consisting of many components is replaced with the analysis of the heating and evaporation of droplets consisting of relatively few quasi-components. It is demonstrated that for Diesel and gasoline fuel droplets the predictions of the model based on five quasi-components are almost indistinguishable from the predictions of the model based on twenty quasi-components for Diesel fuel droplets and are very close to the predictions of the model based on thirteen quasi-components for gasoline fuel droplets. It is recommended that in the cases of both Diesel and gasoline spray combustion modelling, the analysis of droplet heating and evaporation be based on as few as five quasi-components.
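    The replacement of many actual components by a few quasi-components amounts to grouping components with close carbon numbers n and averaging their properties over each group; the weighting by molar fractions in the sketch below is an assumption for illustration, and the composition is made up rather than one of the paper's Diesel or gasoline surrogates:

```python
# Minimal sketch: lump components with close carbon numbers into quasi-components
# whose properties are molar-fraction-weighted averages of the group members.
import numpy as np

carbon_n = np.array([10, 11, 12, 14, 15, 16, 18, 19, 20])    # carbon numbers
mole_x   = np.array([0.10, 0.15, 0.10, 0.15, 0.15, 0.10, 0.10, 0.10, 0.05])
boil_T   = np.array([447., 469., 489., 527., 543., 560., 589., 602., 616.])  # K, illustrative

groups = [slice(0, 3), slice(3, 6), slice(6, 9)]              # three quasi-components
for g in groups:
    x_g = mole_x[g].sum()
    n_g = np.sum(carbon_n[g] * mole_x[g]) / x_g               # average carbon number
    T_g = np.sum(boil_T[g] * mole_x[g]) / x_g                 # average property
    print(f"quasi-component: x = {x_g:.2f}, n = {n_g:.1f}, T_boil = {T_g:.0f} K")
```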

  17. MODELING OF SYSTEM COMPONENTS OF EDUCATIONAL PROGRAMS IN HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    E. K. Samerkhanova

    2016-01-01

    Full Text Available Based on the principles of systems studies, the paper describes the components of the educational program management system. Educational program management is a set of substantive, procedural, resource, subject-activity, efficiency and evaluation components, which ensures the integrity of integration processes at all levels of education. Stability and development in the management of educational programs are achieved by identifying and securing social norms and the status of the educational institution's program managers, so as to ensure the achievement of modern quality of education. Content management provides the relevant educational content in accordance with the requirements of the educational and professional standards; process management ensures the efficient organization of rational distribution of process flows; resource management provides optimal distribution of personnel, information and methodological, material and technical equipment of the educational program; contingent management provides subject-activity interaction of participants in the educational process; quality management ensures the quality of educational services.

  18. Implementing components of the routines-based model

    OpenAIRE

    McWilliam, Robin; Fernández Valero, Rosa

    2015-01-01

    The MBR is comprised of 17 components that can generally be grouped into practices related to (a) functional assessment and intervention planning (for example, Routines-Based Interview), (b) organization of services (including location and staffing), (c) service delivery to children and families (using a consultative approach with families and teachers, integrated therapy), (d) classroom organization (for example, classroom zones), and (e) supervision and training through ch...

  19. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, T.J.; Long, K.S.; Sayre, J.A. [Sandia National Labs., Albuquerque, NM (United States); Hull, A.L. [Sandia National Labs., Livermore, CA (United States); Carey, D.A.; Sim, J.R.; Smith, M.G. [Allied-Signal Aerospace Co., Kansas City, MO (United States). Kansas City Div.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  20. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  1. Exploring component-based approaches in forest landscape modeling

    Science.gov (United States)

    H. S. He; D. R. Larsen; D. J. Mladenoff

    2002-01-01

    Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...

  2. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-17

    Motor speed can be either positive or negative, depending upon the propelling or regenerative braking scenario. The simulation provides three... the machine during generation or regenerative braking. To use the model, the user modifies the motor model criteria parameters by double-clicking... (Systems Engineering and Technology Symposium, Modeling & Simulation, Testing and Validation (MSTV) Mini-Symposium, August 9-11, Dearborn, Michigan)

  3. Modeling dynamics of biological and chemical components of aquatic ecosystems

    International Nuclear Information System (INIS)

    Lassiter, R.R.

    1975-05-01

    To provide capability to model aquatic ecosystems or their subsystems as needed for particular research goals, a modeling strategy was developed. Submodels of several processes common to aquatic ecosystems were developed or adapted from previously existing ones. Included are submodels for photosynthesis as a function of light and depth, biological growth rates as a function of temperature, dynamic chemical equilibrium, feeding and growth, and various types of losses to biological populations. These submodels may be used as modules in the construction of models of subsystems or ecosystems. A preliminary model for the nitrogen cycle subsystem was developed using the modeling strategy and applicable submodels. (U.S.)
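    The photosynthesis-versus-light-and-depth submodel mentioned above is commonly built from exponential light attenuation with depth combined with a saturating photosynthesis-irradiance curve; the sketch below shows that generic construction with illustrative numbers, not the report's exact formulation:

```python
# Minimal sketch: photosynthesis as a function of light and depth, using
# Beer-Lambert light attenuation and a saturating photosynthesis-irradiance curve.
import numpy as np

I0 = 1500.0      # surface irradiance (illustrative units)
k_ext = 0.4      # light extinction coefficient [1/m]
P_max = 2.0      # maximum photosynthetic rate
I_half = 300.0   # half-saturation irradiance

depth = np.linspace(0.0, 10.0, 11)             # m
irradiance = I0 * np.exp(-k_ext * depth)       # Beer-Lambert attenuation
photosynthesis = P_max * irradiance / (I_half + irradiance)

for z, p in zip(depth, photosynthesis):
    print(f"depth {z:4.1f} m  ->  rate {p:.2f}")
```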

  4. A three-component, hierarchical model of executive attention

    OpenAIRE

    Whittle, Sarah; Pantelis, Christos; Testa, Renee; Tiego, Jeggan; Bellgrove, Mark

    2017-01-01

    Executive attention refers to the goal-directed control of attention. Existing models of executive attention distinguish between three correlated, but empirically dissociable, factors related to selectively attending to task-relevant stimuli (Selective Attention), inhibiting task-irrelevant responses (Response Inhibition), and actively maintaining goal-relevant information (Working Memory Capacity). In these models, Selective Attention and Response Inhibition are moderately strongly correlate...

  5. Economic Modeling as a Component of Academic Strategic Planning.

    Science.gov (United States)

    MacKinnon, Joyce; Sothmann, Mark; Johnson, James

    2001-01-01

    Computer-based economic modeling was used to enable a school of allied health to define outcomes, identify associated costs, develop cost and revenue models, and create a financial planning system. As a strategic planning tool, it assisted realistic budgeting and improved efficiency and effectiveness. (Contains 18 references.) (SK)

  6. Component vibration of VVER-reactors - diagnostics and modelling

    International Nuclear Information System (INIS)

    Altstadt, E.; Scheffler, M.; Weiss, F.P.

    1994-01-01

    The model comprises the whole primary circuit, including steam generators, loops, coolant pumps, main isolating valves and certainly the reactor pressure vessel and its internals. It was developed using the finite-element-code ANSYS. The model has a modular structure, so that various operational and assembling states can easily be considered. (orig./DG)

  7. PyCatch: Component based hydrological catchment modelling

    NARCIS (Netherlands)

    Lana-Renault, N.; Karssenberg, D.J.

    2013-01-01

    Dynamic numerical models are powerful tools for representing and studying environmental processes through time. Usually they are constructed with environmental modelling languages, which are high-level programming languages that operate at the level of thinking of the scientists. In this paper we

  8. Repeat, Low Altitude Measurements of Vegetation Status and Biomass Using Manned Aerial and UAS Imagery in a Piñon-Juniper Woodland

    Science.gov (United States)

    Krofcheck, D. J.; Lippitt, C.; Loerch, A.; Litvak, M. E.

    2015-12-01

    Measuring the above ground biomass of vegetation is a critical component of any ecological monitoring campaign. Traditionally, biomass of vegetation was measured with an allometry-based approach. This approach, however, is time-consuming, labor-intensive, and extremely expensive to conduct over large scales and consequently is cost-prohibitive at the landscape scale. Furthermore, in semi-arid ecosystems characterized by vegetation with inconsistent growth morphologies (e.g., piñon-juniper woodlands), even ground-based conventional allometric approaches are often challenging to execute consistently across individuals and through time, increasing the difficulty of the required measurements and consequently limiting the accuracy of the resulting products. To constrain the uncertainty associated with these campaigns, and to expand the extent of our measurement capability, we made repeat measurements of vegetation biomass in a semi-arid piñon-juniper woodland using structure-from-motion (SfM) techniques. We used high-spatial resolution overlapping aerial images and high-accuracy ground control points collected from both manned aircraft and multi-rotor UAS platforms to generate a digital surface model (DSM) for our experimental region. We extracted high-precision canopy volumes from the DSM and compared these to the vegetation allometric data to generate high-precision canopy volume models. We used these models to predict the drivers of allometric equations for Pinus edulis and Juniperus monosperma (canopy height, diameter at breast height, and root collar diameter). Using this approach, we successfully accounted for the carbon stocks in standing live and standing dead vegetation across a 9 ha region, which contained 12.6 Mg / ha of standing dead biomass, with good agreement to our field plots. Here we present the initial results from an object oriented workflow which aims to automate the biomass estimation process of tree crown delineation and volume calculation, and partition
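    The canopy volumes extracted from the DSM amount to summing, over each delineated crown footprint, the height of the canopy surface above the ground surface times the pixel area; a minimal raster sketch with synthetic arrays, not the authors' object-oriented workflow:

```python
# Minimal sketch: crown volume from a digital surface model (DSM) and a
# bare-earth terrain model (DTM) over a delineated crown mask.
import numpy as np

pixel_area = 0.25                              # m^2 per pixel (0.5 m ground sample distance)
dsm = np.array([[102.1, 103.0, 102.4],
                [102.8, 104.2, 103.1],
                [102.0, 102.9, 102.2]])        # canopy surface elevations [m]
dtm = np.full_like(dsm, 101.5)                 # bare-earth elevations [m]
crown_mask = np.array([[0, 1, 0],
                       [1, 1, 1],
                       [0, 1, 0]], dtype=bool)

canopy_height = np.clip(dsm - dtm, 0.0, None)           # canopy height model
volume = canopy_height[crown_mask].sum() * pixel_area   # m^3 for this crown
max_height = canopy_height[crown_mask].max()            # a driver for allometric equations
print(f"crown volume {volume:.2f} m^3, max height {max_height:.2f} m")
```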

  9. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
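    Once the minimal cut sets and the component fault probabilities are available, the system failure probability can be bounded in the usual way; a minimal sketch with toy cut sets (not the chapter's example), assuming independent component faults:

```python
# Minimal sketch: system failure probability from minimal cut sets with
# independent component faults; rare-event bound plus a pairwise correction.
from itertools import combinations

p_fault = {"sensor": 1e-3, "controller": 5e-4, "actuator": 2e-3, "link": 1e-3}
cut_sets = [{"sensor", "link"}, {"controller"}, {"actuator", "link"}]

def cut_prob(components):
    prob = 1.0
    for comp in components:
        prob *= p_fault[comp]
    return prob

upper_bound = sum(cut_prob(cs) for cs in cut_sets)       # rare-event approximation

# Second-order (Bonferroni) correction: subtract probabilities of pairwise unions
pairwise = sum(cut_prob(a | b) for a, b in combinations(cut_sets, 2))
print("rare-event bound:        ", upper_bound)
print("with pairwise correction:", upper_bound - pairwise)
```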

  10. Multiparticle production in a two-component dual parton model

    International Nuclear Information System (INIS)

    Aurenche, P.; Bopp, F.W.; Capella, A.; Kwiecinski, J.; Maire, M.; Ranft, J.; Tran Thanh Van, J.

    1992-01-01

    The dual parton model (DPM) describes soft and semihard multiparticle production. The version of the DPM presented in this paper includes soft and hard mechanisms as well as diffractive processes. The model is formulated as a Monte Carlo event generator. We calculate in this model, in the energy range of the hadron colliders, rapidity distributions and the rise of the rapidity plateau with the collision energy, transverse-momentum distributions and the rise of average transverse momenta with the collision energy, multiplicity distributions in different pseudorapidity regions, and transverse-energy distributions. For most of these quantities we find a reasonable agreement with experimental data

  11. Comprehensive FDTD modelling of photonic crystal waveguide components

    DEFF Research Database (Denmark)

    Lavrinenko, Andrei; Borel, Peter Ingo; Frandsen, Lars Hagedorn

    2004-01-01

    Planar photonic crystal waveguide structures have been modelled using the finite-difference-time-domain method and perfectly matched layers have been employed as boundary conditions. Comprehensive numerical calculations have been performed and compared to experimentally obtained transmission...

  12. New methods for the characterization of pyrocarbon; The two component model of pyrocarbon

    Energy Technology Data Exchange (ETDEWEB)

    Luhleich, H.; Sutterlin, L.; Hoven, H.; Nickel, H.

    1972-04-19

    In the first part, new experiments to clarify the origin of different pyrocarbon components are described. Three new methods (plasma-oxidation, wet-oxidation, ultrasonic method) are presented to expose the carbon black like component in the pyrocarbon deposited in fluidized beds. In the second part, a two component model of pyrocarbon is proposed and illustrated by examples.

  13. System level modeling and component level control of fuel cells

    Science.gov (United States)

    Xue, Xingjian

    This dissertation investigates the fuel cell systems and the related technologies in three aspects: (1) system-level dynamic modeling of both PEM fuel cell (PEMFC) and solid oxide fuel cell (SOFC); (2) condition monitoring scheme development of PEM fuel cell system using model-based statistical method; and (3) strategy and algorithm development of precision control with potential application in energy systems. The dissertation first presents a system level dynamic modeling strategy for PEM fuel cells. It is well known that water plays a critical role in PEM fuel cell operations. It makes the membrane function appropriately and improves the durability. The low temperature operating conditions, however, impose modeling difficulties in characterizing the liquid-vapor two phase change phenomenon, which becomes even more complex under dynamic operating conditions. This dissertation proposes an innovative method to characterize this phenomenon, and builds a comprehensive model for PEM fuel cell at the system level. The model features the complete characterization of multi-physics dynamic coupling effects with the inclusion of dynamic phase change. The model is validated using Ballard stack experimental result from open literature. The system behavior and the internal coupling effects are also investigated using this model under various operating conditions. Anode-supported tubular SOFC is also investigated in the dissertation. While the Nernst potential plays a central role in characterizing the electrochemical performance, the traditional Nernst equation may lead to incorrect analysis results under dynamic operating conditions due to the current reverse flow phenomenon. This dissertation presents a systematic study in this regard to incorporate a modified Nernst potential expression and the heat/mass transfer into the analysis. The model is used to investigate the limitations and optimal results of various operating conditions; it can also be utilized to perform the

  14. The dynamic cusp at low altitudes: a case study utilizing Viking, DMSP-F7, and Sondrestrom incoherent scatter radar observations

    Directory of Open Access Journals (Sweden)

    J. Watermann

    Full Text Available Coincident multi-instrument magnetospheric and ionospheric observations have made it possible to determine the position of the ionospheric footprint of the magnetospheric cusp and to monitor its evolution over time. The data used include charged particle and magnetic field measurements from the Earth-orbiting Viking and DMSP-F7 satellites, electric field measurements from Viking, interplanetary magnetic field and plasma data from IMP-8, and Sondrestrom incoherent scatter radar observations of the ionospheric plasma density, temperature, and convection. Viking detected cusp precipitation poleward of 75.5° invariant latitude. The ionospheric response to the observed electron precipitation was simulated using an auroral model. It predicts enhanced plasma density and elevated electron temperature in the upper E- and F-regions. Sondrestrom radar observations are in agreement with the predictions. The radar detected a cusp signature on each of five consecutive antenna elevation scans covering 1.2 h local time. The cusp appeared to be about 2° invariant latitude wide, and its ionospheric footprint shifted equatorward by nearly 2° during this time, possibly influenced by an overall decrease in the IMF Bz component. The radar plasma drift data and the Viking magnetic and electric field data suggest that the cusp was associated with a continuous, rather than a patchy, merging between the IMF and the geomagnetic field.

  15. Validation of a blood marker for plasma volume in endurance athletes during a live-high train-low altitude training camp.

    Science.gov (United States)

    Lobigs, Louisa M; Garvican-Lewis, Laura A; Vuong, Victor L; Tee, Nicolin; Gore, Christopher J; Peeling, Peter; Dawson, Brian; Schumacher, Yorck O

    2018-02-19

    Altitude is a confounding factor within the Athlete Biological Passport (ABP) due, in part, to the plasma volume (PV) response to hypoxia. Here, a newly developed PV blood test is applied to assess the possible efficacy of reducing the influence of PV on the volumetric ABP markers: haemoglobin concentration ([Hb]) and the OFF-score. Endurance athletes (n=34) completed a 21-night simulated live-high train-low (LHTL) protocol (14 h per day at 3000 m). Bloods were collected twice pre-altitude; at days 3, 8, and 15 at altitude; and 1, 7, 21, and 42 days post-altitude. A full blood count was performed on the whole blood sample. Serum was analysed for transferrin, albumin, calcium, creatinine, total protein, and low-density lipoprotein. The PV blood test (consisting of the serum markers, [Hb] and platelets) was applied to the ABP adaptive model and new reference predictions were calculated for [Hb] and the OFF-score, thereby reducing the PV variance component. The PV correction refined the ABP reference predictions. The number of atypical passport findings (ATPFs) for [Hb] was reduced from 7 of 5 subjects to 6 of 3 subjects. The OFF-score ATPFs increased with the PV correction (from 9 to 13, 99% specificity); most likely the result of more specific reference limit predictions combined with the altitude-induced increase in red cell production. Importantly, all abnormal biomarker values were identified by a low confidence value. Although the multifaceted, individual physiological response to altitude confounded some results, the PV model appears capable of reducing the impact of PV fluctuations on [Hb]. Copyright © 2018 John Wiley & Sons, Ltd.

  16. A Bayesian Analysis of Unobserved Component Models Using Ox

    Directory of Open Access Journals (Sweden)

    Charles S. Bos

    2011-05-01

    Full Text Available This article details a Bayesian analysis of the Nile river flow data, using a similar state space model as other articles in this volume. For this data set, Metropolis-Hastings and Gibbs sampling algorithms are implemented in the programming language Ox. These Markov chain Monte Carlo methods only provide output conditioned upon the full data set. For filtered output, conditioning only on past observations, the particle filter is introduced. The sampling methods are flexible, and this advantage is used to extend the model to incorporate a stochastic volatility process. The volatility changes both in the Nile data and also in daily S&P 500 return data are investigated. The posterior density of parameters and states is found to provide information on which elements of the model are easily identifiable, and which elements are estimated with less precision.
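    For readers unfamiliar with the filtered-output step mentioned above, the following is a compact sketch (in Python rather than Ox, with synthetic data instead of the Nile series) of a bootstrap particle filter for a local level model; all parameter values are illustrative assumptions.

```python
# Bootstrap particle filter sketch for a local level model:
#   y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t  (synthetic data, illustrative parameters)
import numpy as np

rng = np.random.default_rng(7)
T, sig_eps, sig_eta = 100, 1.0, 0.3
mu = np.cumsum(rng.normal(0, sig_eta, T)) + 10.0        # latent level
y = mu + rng.normal(0, sig_eps, T)                       # observations

N = 2000
particles = rng.normal(10.0, 2.0, N)                     # initial particle cloud
filtered = np.empty(T)
for t in range(T):
    particles = particles + rng.normal(0, sig_eta, N)    # propagate through the state equation
    logw = -0.5 * ((y[t] - particles) / sig_eps) ** 2    # likelihood weights (up to a constant)
    w = np.exp(logw - logw.max()); w /= w.sum()
    filtered[t] = np.sum(w * particles)                  # filtered mean E[mu_t | y_1..t]
    particles = rng.choice(particles, size=N, p=w)       # multinomial resampling

print("last filtered level:", filtered[-1].round(2), "| last observation:", y[-1].round(2))
```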

  17. Mass models for disk and halo components in spiral galaxies

    International Nuclear Information System (INIS)

    Athanassoula, E.; Bosma, A.

    1987-01-01

    The mass distribution in spiral galaxies is investigated by means of numerical simulations, summarizing the results reported by Athanassoula et al. (1986). Details of the modeling technique employed are given, including bulge-disk decomposition; computation of bulge and disk rotation curves (assuming constant mass/light ratios for each); and determination (for spherical symmetry) of the total halo mass out to the optical radius, the concentration indices, the halo-density power law, the core radius, the central density, and the velocity dispersion. Also discussed are the procedures for incorporating galactic gas and checking the spiral structure extent. It is found that structural constraints limit disk mass/light ratios to a range of 0.3 dex, and that the most likely models are maximum-disk models with m = 1 disturbances inhibited. 19 references

  18. Modeling of a remote inspection system for NSSS components

    International Nuclear Information System (INIS)

    Choi, Yoo Rark; Kim, Jae Hee; Lee, Jae Cheol

    2003-03-01

Safety inspections of safety-critical units in nuclear power plants have traditionally been performed with off-line technology, so neither the inspection systems nor the inspection data can be accessed over a network such as the internet. To overcome these problems, we are developing an on-line control and data access system based on WWW and Java technologies that can be used during plant operation. Users can access inspection systems and inspection data using only a web browser. This report discusses the analysis of the existing remote system and of essential techniques such as the Web, Java, the client/server model, and the multi-tier model. It also describes the system model we have developed using these techniques and provides solutions for developing an on-line control and data access system

  19. Three-Component Dust Models for Interstellar Extinction C ...

    Indian Academy of Sciences (India)

without standard' method were used to constrain the dust characteristics in the mean ISM (RV = 3.1), ... Interstellar dust models have evolved as the observational data have advanced, and the most popular dust ... distribution comes from the IRAS observation which shows an excess of 12 μ and 25 μ emission from the ISM ...

  20. Soil Structure - A Neglected Component of Land-Surface Models

    Science.gov (United States)

    Fatichi, S.; Or, D.; Walko, R. L.; Vereecken, H.; Kollet, S. J.; Young, M.; Ghezzehei, T. A.; Hengl, T.; Agam, N.; Avissar, R.

    2017-12-01

    Soil structure is largely absent in most standard sampling and measurements and in the subsequent parameterization of soil hydraulic properties deduced from soil maps and used in Earth System Models. The apparent omission propagates into the pedotransfer functions that deduce parameters of soil hydraulic properties primarily from soil textural information. Such simple parameterization is an essential ingredient in the practical application of any land surface model. Despite the critical role of soil structure (biopores formed by decaying roots, aggregates, etc.) in defining soil hydraulic functions, only a few studies have attempted to incorporate soil structure into models. They mostly looked at the effects on preferential flow and solute transport pathways at the soil profile scale; yet, the role of soil structure in mediating large-scale fluxes remains understudied. Here, we focus on rectifying this gap and demonstrating potential impacts on surface and subsurface fluxes and system wide eco-hydrologic responses. The study proposes a systematic way for correcting the soil water retention and hydraulic conductivity functions—accounting for soil-structure—with major implications for near saturated hydraulic conductivity. Modification to the basic soil hydraulic parameterization is assumed as a function of biological activity summarized by Gross Primary Production. A land-surface model with dynamic vegetation is used to carry out numerical simulations with and without the role of soil-structure for 20 locations characterized by different climates and biomes across the globe. Including soil structure affects considerably the partition between infiltration and runoff and consequently leakage at the base of the soil profile (recharge). In several locations characterized by wet climates, a few hundreds of mm per year of surface runoff become deep-recharge accounting for soil-structure. Changes in energy fluxes, total evapotranspiration and vegetation productivity

  1. Feedback loops and temporal misalignment in component-based hydrologic modeling

    Science.gov (United States)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
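    The coupling issue described above can be made concrete with a small, hypothetical sketch (not the OpenMI API): component A reports an interface flux on a coarse time step, component B consumes it on a finer step, and the mass-balance error depends on the interpolation scheme used to bridge the two time grids. The flux signal and step sizes below are arbitrary choices for illustration.

```python
import numpy as np

dt_a, dt_b, t_end = 60.0, 15.0, 3600.0                      # coarse step, fine step, total time [s]

t_a = np.arange(0.0, t_end + dt_a, dt_a)
flux_a = 1e-3 * (1.0 + np.sin(2 * np.pi * t_a / 900.0))     # interface flux reported by A [kg/s]
mass_sent = np.sum((flux_a[:-1] + flux_a[1:]) / 2.0) * dt_a # mass A integrates on its own grid

t_b = np.arange(0.0, t_end, dt_b)                           # B's finer time levels
hold = flux_a[np.searchsorted(t_a, t_b, side="right") - 1]  # zero-order hold of A's flux
lin = np.interp(t_b, t_a, flux_a)                           # linear interpolation in time

for name, f in (("zero-order hold     ", hold), ("linear interpolation", lin)):
    mass_received = np.sum(f) * dt_b                        # B accumulates mass per sub-step
    print(f"{name}: mass-balance error = {mass_sent - mass_received:+.3e} kg")
```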

  2. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of a set of components and different types of flows propagating through them. Each component has a function table that describes its input-output relations. For the components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented
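    A minimal, hypothetical illustration of the trace-back idea (component names, deviations and basic events are invented for this sketch; a real implementation would also distinguish gate types and handle state transitions):

```python
# Each component exposes a "function table" mapping a deviated output to the combinations of
# input deviations or internal basic events that could cause it; tracing a top event back
# through connected components yields a (simplified) fault tree.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    # output deviation -> list of alternative causes (OR); each cause is a list (AND)
    # of internal basic events ("EV:...") or upstream input deviations.
    function_table: dict = field(default_factory=dict)
    upstream: dict = field(default_factory=dict)   # input deviation -> (component, its output deviation)

def trace_back(component, deviation):
    """Return a nested OR/AND structure explaining `deviation` at `component`."""
    alternatives = []
    for cause in component.function_table.get(deviation, []):
        branch = []
        for item in cause:
            if item.startswith("EV:"):                      # basic event: leaf node
                branch.append(item)
            else:                                           # propagate to the upstream component
                up_comp, up_out = component.upstream[item]
                branch.append(trace_back(up_comp, up_out))
        alternatives.append(("AND", branch))
    return ("OR", component.name, alternatives)

# Toy model: a pump feeds a nozzle; "no spray" traces back to nozzle, pump or power events.
pump = Component("pump", {"no_outflow": [["EV:pump_fails"], ["EV:power_lost"]]})
nozzle = Component("nozzle",
                   {"no_spray": [["EV:nozzle_blocked"], ["no_inflow"]]},
                   upstream={"no_inflow": (pump, "no_outflow")})

print(trace_back(nozzle, "no_spray"))
```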

  3. Research on development model of nuclear component based on life cycle management

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

At present the development process of nuclear components, and even the components themselves, is increasingly supported by computer technology. This growing utilization of computers and software has accelerated the development of nuclear technology on one hand and brought new problems on the other. In particular, the combination of hardware, software and humans has increased nuclear component system complexities to an unprecedented level. To address this problem, Life Cycle Management technology is adopted for nuclear component systems, and an in-depth discussion of the development process of a nuclear component is presented. According to the characteristics of nuclear component development, such as the complexity and strict safety requirements of the components, the long design period, changeable design specifications and requirements, high capital investment, and compliance with engineering codes/standards, a development life-cycle model of the nuclear component is presented. The development life-cycle model is classified into three levels, namely, the component-level development life-cycle, the sub-component development life-cycle and the component-level verification/certification life-cycle. The purposes and outcomes of the development processes are described in detail. A process framework for nuclear components based on system engineering and a development environment for nuclear components are discussed as future research work. (authors)

  4. Five-component propagation model for steam explosion analysis

    International Nuclear Information System (INIS)

    Yang, Y.; Moriyama, Kiyofumi; Park, H.S.; Maruyama, Yu; Sugimoto, Jun

    1999-01-01

A five-field simulation code JASMINE-pro has been developed at JAERI for the calculation of the propagation and explosion phase of steam explosions. The basic equations and the constitutive relationships specifically utilized in the propagation models in the code are introduced in this paper. Some calculations simulating the KROTOS 1D and 2D steam explosion experiments are also presented to show the current capability of the code. (author)

  5. Component-oriented approach to the development and use of numerical models in high energy physics

    International Nuclear Information System (INIS)

    Amelin, N.S.; Komogorov, M.Eh.

    2002-01-01

We discuss the main concepts of a component approach to the development and use of numerical models in high energy physics. This approach is realized as the NiMax software system. The discussed concepts are illustrated by numerous examples of the system user session. In the appendix chapter we describe the physics and numerical algorithms of the model components used to simulate hadronic and nuclear collisions at high energies. These components are members of hadronic application modules that have been developed with the help of the NiMax system. This report serves as an early release of the NiMax manual, intended mainly for model component users

  6. Mathematical Model for Multicomponent Adsorption Equilibria Using Only Pure Component Data

    DEFF Research Database (Denmark)

    Marcussen, Lis

    2000-01-01

    A mathematical model for nonideal adsorption equilibria in multicomponent mixtures is developed. It is applied with good results for pure substances and for prediction of strongly nonideal multicomponent equilibria using only pure component data. The model accounts for adsorbent...
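    The record's own model is not specified in enough detail to reproduce here; as a simpler stand-in for the general idea of predicting multicomponent adsorption from pure-component data, the sketch below uses the extended Langmuir isotherm (which, unlike the model above, assumes an ideal mixture). Parameter values are hypothetical.

```python
# Extended Langmuir isotherm: mixture loadings predicted from each component's
# pure-component capacity q_max_i and affinity b_i.
import numpy as np

def extended_langmuir(p, q_max, b):
    """p: partial pressures [bar]; q_max: capacities [mol/kg]; b: affinities [1/bar]."""
    p, q_max, b = map(np.asarray, (p, q_max, b))
    denom = 1.0 + np.sum(b * p)
    return q_max * b * p / denom

# Hypothetical pure-component parameters for a binary mixture.
q = extended_langmuir(p=[0.6, 0.4], q_max=[3.0, 2.2], b=[1.5, 0.7])
print("predicted mixture loadings [mol/kg]:", np.round(q, 3))
```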

  7. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Carl Stern; Martin Lee

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models

  8. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    CERN Document Server

    Carl-Stern

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models.

  9. Three Fundamental Components of the Autopoiesic Leadership Model

    Directory of Open Access Journals (Sweden)

    Mateja Kalan

    2017-06-01

Full Text Available Research Question (RQ): What type of leadership could be developed upon transformational leadership? Purpose: The purpose of the research was to create a new leadership style. Its variables can be further developed upon transformational leadership variables. Namely, this leadership style is known as a successful leadership style in successful organisations. Method: In the research of published papers from scientific databases, we relied on the triangulation of theories. To clarify the research question, we have researched different authors, who based their research papers on different hypotheses. In some articles, hypotheses were even contradictory. Results: Through the research, we have concluded that authors often changed certain variables when researching the topic of transformational leadership. We have correlated these variables and developed a new model, naming it autopoiesic leadership. Its main variables are (1) goal orientation, (2) emotional sensitivity, and (3) manager's flexibility in organisations. Organisation: Our research can have a positive effect on managers in terms of recognising the importance of selected variables. Practical application of autopoiesic leadership can imply more efficiency in business processes of a company, increasing its financial performance. Society: Autopoiesic leadership is a leadership style that largely influences the use of the individual's internal resources. Thus, she or he becomes internally motivated, and this is the basis for quality work. This strengthens employees' social aspect which consequently also has a positive effect on their life outside the organisational system, i.e. their family and broader living environment. Originality: In the worldwide literature, we have noticed the concept autopoiesis in papers about management subjects, but the autopoiesic leadership model has not been developed so far. Limitations / Future Research: We based our research on the triangulation of theories

  10. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

    Jan 5, 2016 ... We propose a two-component dark matter (DM) model, each component of which is a real singlet scalar, to explain results from both direct and indirect detection experiments. We put the constraints on the model parameters from theoretical bounds, PLANCK relic density results and direct DM experiments.

  11. Modelling insights on the partition of evapotranspiration components across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-04-01

    Recent studies using various methodologies have found a large variability (from 35 to 90%) in the ratio of transpiration to total evapotranspiration (denoted as T:ET) across biomes or even at the global scale. Concurrently, previous results suggest that T:ET is independent of mean precipitation and has a positive correlation with Leaf Area Index (LAI). We used the mechanistic ecohydrological model, T&C, with a refined process-based description of soil resistance and a detailed treatment of canopy biophysics and ecophysiology, to investigate T:ET across multiple biomes. Contrary to observation-based estimates, simulation results highlight a well-constrained range of mean T:ET across biomes that is also robust to perturbations of the most sensitive parameters. Simulated T:ET was confirmed to be independent of average precipitation, while it was found to be uncorrelated with LAI across biomes. Higher values of LAI increase evaporation from interception but suppress ground evaporation with the two effects largely cancelling each other in many sites. These results offer mechanistic, model-based, evidence to the ongoing research about the range of T:ET and the factors affecting its magnitude across biomes.

  12. Virtual Models Linked with Physical Components in Construction

    DEFF Research Database (Denmark)

    Sørensen, Kristian Birch

    The use of virtual models supports a fundamental change in the working practice of the construction industry. It changes the primary information carrier (drawings) from simple manually created depictions of the building under construction to visually realistic digital representations that also...... engineering and business development in an iterative and user needs centred system development process. The analysis of future business perspectives presents an extensive number of new working processes that can assist in solving major challenges in the construction industry. Three of the most promising...... practices and development of new ontologies. Based on the experiences gained in this PhD project, some of the important future challenges are also to show the benefits of using modern information and communication technology to practitioners in the construction industry and to communicate this knowledge...

  13. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

Research highlights: → A semi-Markov process (SMP) model is used to evaluate the system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve a reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider a Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
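    A toy Monte Carlo check in the same spirit as the validation mentioned above (hypothetical Weibull parameters, no repair modelled): two redundant trains fail independently, and the simulated probability that both fail within the mission time is compared with the analytic product of the Weibull CDFs.

```python
import numpy as np

rng = np.random.default_rng(0)
mission = 8760.0                       # mission time [h]
shape = np.array([1.8, 1.8])           # Weibull shape parameters (beta), illustrative
scale = np.array([30000.0, 45000.0])   # Weibull scale parameters (eta) [h], illustrative

n = 200_000
ttf = scale * rng.weibull(shape, size=(n, 2))      # sampled times to failure per train
p_mc = np.mean(np.all(ttf < mission, axis=1))      # both trains fail before mission end

# Analytic check: product of the two Weibull CDFs evaluated at the mission time.
p_exact = np.prod(1.0 - np.exp(-(mission / scale) ** shape))
print(f"Monte Carlo: {p_mc:.3e}   analytic: {p_exact:.3e}")
```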

  14. The Component Slope Linear Model for Calculating Intensive Partial Molar Properties: Application to Waste Glasses

    International Nuclear Information System (INIS)

    Reynolds, Jacob G.

    2013-01-01

Partial molar properties are the changes occurring when the fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in fraction of one component must be offset with a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on a graph of this curve is the partial molar property for that constituent. Actually plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. This model is demonstrated by applying it to the constant pressure heat capacity data from the NaOH–NaAl(OH)₄–H₂O system, a system that simplifies Hanford nuclear waste. The partial molar properties of H₂O, NaOH, and NaAl(OH)₄ are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM model has been previously used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperature of the glass components
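    A simplified numerical illustration of the idea (synthetic data, not the Hanford system): when a mixture property blends linearly in the mole fractions, fitting property = Σ b_i x_i by least squares recovers coefficients that play the role of the intensive partial molar properties.

```python
import numpy as np

rng = np.random.default_rng(1)
true_partial_molar = np.array([75.3, 59.5, 210.0])     # e.g. heat capacities [J/(mol K)], illustrative

# Random compositions on the simplex (mole fractions summing to one).
x = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=40)
cp = x @ true_partial_molar + rng.normal(scale=0.5, size=40)   # noisy "measurements"

b, *_ = np.linalg.lstsq(x, cp, rcond=None)             # no intercept: the coefficients are the slopes
print("estimated partial molar properties:", np.round(b, 2))
```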

  15. Modelling the effect of mixture components on permeation through skin.

    Science.gov (United States)

    Ghafourian, T; Samaras, E G; Brooks, J D; Riviere, J E

    2010-10-15

    A vehicle influences the concentration of penetrant within the membrane, affecting its diffusivity in the skin and rate of transport. Despite the huge amount of effort made for the understanding and modelling of the skin absorption of chemicals, a reliable estimation of the skin penetration potential from formulations remains a challenging objective. In this investigation, quantitative structure-activity relationship (QSAR) was employed to relate the skin permeation of compounds to the chemical properties of the mixture ingredients and the molecular structures of the penetrants. The skin permeability dataset consisted of permeability coefficients of 12 different penetrants each blended in 24 different solvent mixtures measured from finite-dose diffusion cell studies using porcine skin. Stepwise regression analysis resulted in a QSAR employing two penetrant descriptors and one solvent property. The penetrant descriptors were octanol/water partition coefficient, logP and the ninth order path molecular connectivity index, and the solvent property was the difference between boiling and melting points. The negative relationship between skin permeability coefficient and logP was attributed to the fact that most of the drugs in this particular dataset are extremely lipophilic in comparison with the compounds in the common skin permeability datasets used in QSAR. The findings show that compounds formulated in vehicles with small boiling and melting point gaps will be expected to have higher permeation through skin. The QSAR was validated internally, using a leave-many-out procedure, giving a mean absolute error of 0.396. The chemical space of the dataset was compared with that of the known skin permeability datasets and gaps were identified for future skin permeability measurements. Copyright 2010 Elsevier B.V. All rights reserved.

  16. A review of typical thermal fatigue failure models for solder joints of electronic components

    Science.gov (United States)

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch between the coefficients of thermal expansion of the electronic component and its substrate produces different thermal strains in the two parts, leading to stress concentration; under repeated cycling, cracks initiate and gradually propagate [1]. In this paper, typical thermal fatigue failure models for solder joints of electronic components are classified, and the methods for obtaining the parameters in each model are summarized based on a survey of the domestic and foreign literature.
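    One of the typical models referred to above is the Coffin-Manson (Engelmaier-type) relation for low-cycle fatigue of solder joints; the sketch below evaluates it with placeholder material constants, not values taken from the paper.

```python
# Coffin-Manson relation: cycles to failure N_f as a function of the plastic (shear)
# strain range per thermal cycle,  N_f = 0.5 * (delta_gamma / (2 * eps_f)) ** (1 / c).
# eps_f and c below are illustrative placeholders for a solder alloy.
def coffin_manson_cycles(delta_gamma, eps_f=0.325, c=-0.442):
    """Cycles to failure for a given cyclic plastic shear strain range."""
    return 0.5 * (delta_gamma / (2.0 * eps_f)) ** (1.0 / c)

for dg in (0.005, 0.01, 0.02):   # larger strain range per cycle -> fewer cycles to failure
    print(f"delta_gamma = {dg:.3f}  ->  N_f ≈ {coffin_manson_cycles(dg):,.0f} cycles")
```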

  17. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  19. Coordinated Cluster, ground-based instrumentation and low-altitude satellite observations of transient poleward-moving events in the ionosphere and in the tail lobe

    Directory of Open Access Journals (Sweden)

    M. Lockwood

    2001-09-01

Full Text Available During the interval between 8:00–9:30 on 14 January 2001, the four Cluster spacecraft were moving from the central magnetospheric lobe, through the dusk sector mantle, on their way towards intersecting the magnetopause near 15:00 MLT and 15:00 UT. Throughout this interval, the EISCAT Svalbard Radar (ESR) at Longyearbyen observed a series of poleward-moving transient events of enhanced F-region plasma concentration ("polar cap patches"), with a repetition period of the order of 10 min. Allowing for the estimated solar wind propagation delay of 75 (±5) min, the interplanetary magnetic field (IMF) had a southward component during most of the interval. The magnetic footprint of the Cluster spacecraft, mapped to the ionosphere using the Tsyganenko T96 model (with input conditions prevailing during this event), was to the east of the ESR beams. Around 09:05 UT, the DMSP-F12 satellite flew over the ESR and showed a sawtooth cusp ion dispersion signature that also extended into the electrons on the equatorward edge of the cusp, revealing a pulsed magnetopause reconnection. The consequent enhanced ionospheric flow events were imaged by the SuperDARN HF backscatter radars. The average convection patterns (derived using the AMIE technique on data from the magnetometers, the EISCAT and SuperDARN radars, and the DMSP satellites) show that the associated poleward-moving events also convected over the predicted footprint of the Cluster spacecraft. Cluster observed enhancements in the fluxes of both electrons and ions. These events were found to be essentially identical at all four spacecraft, indicating that they had a much larger spatial scale than the satellite separation of the order of 600 km. Some of the events show a correspondence between the lowest energy magnetosheath electrons detected by the PEACE instrument on Cluster (10–20 eV) and the topside ionospheric enhancements seen by the ESR (at 400–700 km). We suggest that a potential barrier at the

  20. Experiment planning using high-level component models at W7-X

    International Nuclear Information System (INIS)

    Lewerentz, Marc; Spring, Anett; Bluhm, Torsten; Heimann, Peter; Hennig, Christine; Kühner, Georg; Kroiss, Hugo; Krom, Johannes G.; Laqua, Heike; Maier, Josef; Riemann, Heike; Schacht, Jörg; Werner, Andreas; Zilker, Manfred

    2012-01-01

Highlights: ► Introduction of models for an abstract description of fusion experiments. ► Component models support creating feasible experiment programs at planning time. ► Component models contain knowledge about physical and technical constraints. ► Generated views on models allow crucial information to be presented. - Abstract: The superconducting stellarator Wendelstein 7-X (W7-X) is a fusion device capable of steady-state operation and, at the same time, a very complex technical system. To cope with these requirements a modular and strongly hierarchical component-based control and data acquisition system has been designed. The behavior of W7-X is characterized by thousands of technical parameters of the participating components. The intended sequential change of those parameters during an experiment is defined in an experiment program. Planning such an experiment program is a crucial and complex task. To reduce the complexity an abstract, more physics-oriented high-level layer has been introduced earlier. The so-called high-level (physics) parameters are used to encapsulate technical details. This contribution focuses on the extension of this layer to a high-level component model. It completely describes the behavior of a component for a certain period of time. It allows not only simple value ranges but also complex dependencies between physics parameters to be defined. These can be dependencies within components, dependencies between components or temporal dependencies. Component models can now be analyzed to generate various views of an experiment. A first implementation of such an analysis process has already been completed. A graphical preview of a planned discharge can be generated from a chronological sequence of component models. This allows physicists to survey complex planned experiment programs at a glance.
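    A highly simplified, hypothetical sketch of the idea of a component model that declares parameter ranges and cross-parameter dependencies so that a planned experiment program can be validated before execution (the component name, parameters and limits below are invented and do not reflect actual W7-X constraints):

```python
from dataclasses import dataclass, field

@dataclass
class ComponentModel:
    name: str
    ranges: dict = field(default_factory=dict)       # parameter -> (min, max)
    checks: list = field(default_factory=list)       # callables encoding cross-parameter dependencies

    def validate(self, params):
        """Return a list of constraint violations for a planned parameter set."""
        errors = []
        for p, (lo, hi) in self.ranges.items():
            if not lo <= params.get(p, lo) <= hi:
                errors.append(f"{self.name}: {p}={params[p]} outside [{lo}, {hi}]")
        errors += [msg for ok, msg in (chk(params) for chk in self.checks) if not ok]
        return errors

# Hypothetical heating component with an invented injected-energy dependency.
ecrh = ComponentModel(
    "ECRH", ranges={"power_MW": (0.0, 8.0), "pulse_s": (0.0, 1800.0)},
    checks=[lambda p: (p["power_MW"] * p["pulse_s"] <= 4000.0,
                       "ECRH: injected energy exceeds the assumed 4 GJ limit")])

print(ecrh.validate({"power_MW": 5.0, "pulse_s": 900.0}))   # -> energy-limit violation
```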

  1. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restrictive assumption that all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In the previous models, there is a restrictive assumption that all components of a subsystem must be homogeneous. • The presented model provides a possibility for the subsystems' components to be non-homogeneous in the required conditions. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
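    For reference, the underlying system-reliability evaluation for a series-parallel structure with non-homogeneous parallel components is straightforward; the sketch below computes it for a hypothetical three-subsystem design (the optimization itself, e.g. by a GA, is not shown).

```python
import numpy as np

def series_parallel_reliability(subsystems):
    """subsystems: list of lists of component reliabilities (one inner list per parallel subsystem)."""
    r_sub = [1.0 - np.prod([1.0 - r for r in comps]) for comps in subsystems]
    return float(np.prod(r_sub))

# Hypothetical design: three subsystems in series, each mixing different component types.
design = [[0.90, 0.85],           # subsystem 1: two non-identical components in parallel
          [0.95, 0.80, 0.80],     # subsystem 2: three components, two of one type
          [0.99]]                 # subsystem 3: single component
print(f"system reliability = {series_parallel_reliability(design):.4f}")
```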

  2. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

component dark matter model with real singlet scalars confronting GeV γ-ray excess from galactic centre and Fermi bubble. Debasish Majumdar Kamakshya Prasad Modak Subhendu Rakshit. Special: Cosmology Volume 86 Issue ...

  3. Model-Based Design Tools for Extending COTS Components To Extreme Environments, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation in this project is model-based design (MBD) tools for predicting the performance and useful life of commercial-off-the-shelf (COTS) components and...

  4. New approaches to the modelling of multi-component fuel droplet heating and evaporation

    KAUST Repository

    Sazhin, Sergei S; Elwardany, Ahmed E; Heikal, Morgan R

    2015-01-01

    numbers n and temperatures is taken into account. The effects of temperature gradient and quasi-component diffusion inside droplets are taken into account. The analysis is based on the Effective Thermal Conductivity/Effective Diffusivity (ETC/ED) model

  5. Multi-component fiber track modelling of diffusion-weighted magnetic resonance imaging data

    Directory of Open Access Journals (Sweden)

    Yasser M. Kadah

    2010-01-01

Full Text Available In conventional diffusion tensor imaging (DTI) based on magnetic resonance data, each voxel is assumed to contain a single component having diffusion properties that can be fully represented by a single tensor. Even though this assumption can be valid in some cases, the general case involves the mixing of components, resulting in significant deviation from the single tensor model. Hence, a strategy that allows the decomposition of data based on a mixture model has the potential of enhancing the diagnostic value of DTI. This project aims to work towards the development and experimental verification of a robust method for solving the problem of multi-component modelling of diffusion tensor imaging data. The new method demonstrates significant error reduction from the single-component model while maintaining practicality for clinical applications, obtaining more accurate fiber tracking results.

  6. Detailed finite element method modeling of evaporating multi-component droplets

    Energy Technology Data Exchange (ETDEWEB)

    Diddens, Christian, E-mail: C.Diddens@tue.nl

    2017-07-01

The evaporation of sessile multi-component droplets is modeled with an axisymmetric finite element method. The model comprises the coupled processes of mixture evaporation, multi-component flow with composition-dependent fluid properties and thermal effects. Based on representative examples of water–glycerol and water–ethanol droplets, regular and chaotic examples of solutal Marangoni flows are discussed. Furthermore, the relevance of the substrate thickness for the evaporative cooling of volatile binary mixture droplets is pointed out. It is shown how the evaporation of the more volatile component can drastically decrease the interface temperature, so that ambient vapor of the less volatile component condenses on the droplet. Finally, results of this model are compared with corresponding results of a lubrication theory model, showing that the application of lubrication theory can cause considerable errors even for moderate contact angles of 40°.

  7. Conservative modelling of the moisture and heat transfer in building components under atmospheric excitation

    DEFF Research Database (Denmark)

    Janssen, Hans; Blocken, Bert; Carmeliet, Jan

    2007-01-01

    While the transfer equations for moisture and heat in building components are currently undergoing standardisation, atmospheric boundary conditions, conservative modelling and numerical efficiency are not addressed. In a first part, this paper adds a comprehensive description of those boundary...

  8. A proposed centralised distribution model for the South African automotive component industry

    Directory of Open Access Journals (Sweden)

    Micheline J. Naude

    2009-12-01

Full Text Available Purpose: This article explores the possibility of developing a distribution model, similar to the model developed and implemented by the South African pharmaceutical industry, which could be implemented by automotive component manufacturers for supply to independent retailers. Problem Investigated: The South African automotive components distribution chain is extensive with a number of players of varying sizes, from the larger spares distribution groups to a number of independent retailers. Distributing to the smaller independent retailers is costly for the automotive component manufacturers. Methodology: This study is based on a preliminary study of an explorative nature. Interviews were conducted with a senior staff member from a leading automotive component manufacturer in KwaZulu Natal and nine participants at a senior management level at five of their main customers (aftermarket retailers). Findings: The findings from the empirical study suggest that the aftermarket component industry is mature with the role players well established. The distribution chain to the independent retailer is expensive in terms of transaction and distribution costs for the automotive component manufacturer. A proposed centralised distribution model for supply to independent retailers has been developed which should reduce distribution costs for the automotive component manufacturer in terms of (1) the lowest possible freight rate; (2) timely and controlled delivery; and (3) reduced congestion at the customer's receiving dock. Originality: This research is original in that it explores the possibility of implementing a centralised distribution model for independent retailers in the automotive component industry. Furthermore, there is a dearth of published research on the South African automotive component industry particularly addressing distribution issues. Conclusion: The distribution model as suggested is a practical one and should deliver added value to automotive

  9. Generalized modeling of multi-component vaporization/condensation phenomena for multi-phase-flow analysis

    International Nuclear Information System (INIS)

    Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.

    2003-01-01

    A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential to investigate core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, among which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model is employed to represent effects of non-condensable gases and multi-component mixture on V/C processes. Verification of the model and method employed in the multi-component V/C model of a multi-phase flow code was performed successfully by analyzing a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparison between steam and metallic vapor systems. (orig.)

  10. Revealing the equivalence of two clonal survival models by principal component analysis

    International Nuclear Information System (INIS)

    Lachet, Bernard; Dufour, Jacques

    1976-01-01

The principal component analysis of 21 chlorella cell survival curves, adjusted by one-hit and two-hit target models, leads to quite similar projections on the principal plane: the homologous parameters of these models are linearly correlated; the reason for the statistical equivalence of these two models, in the present state of experimental inaccuracy, is revealed [fr]
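    An illustrative reconstruction of this kind of analysis with synthetic data (hypothetical dose grid, noise level and parameter range; scipy is assumed to be available): survival curves generated from a two-hit (multi-target) model are fitted by both a one-hit and a two-hit model, and the fitted homologous parameters come out strongly correlated.

```python
import numpy as np
from scipy.optimize import curve_fit

one_hit = lambda D, D0: np.exp(-D / D0)                       # S = exp(-D/D0)
two_hit = lambda D, D1: 1.0 - (1.0 - np.exp(-D / D1)) ** 2    # S = 1 - (1 - exp(-D/D1))^2

rng = np.random.default_rng(2)
doses = np.linspace(0.0, 8.0, 15)                             # arbitrary dose grid
p_one, p_two = [], []
for true_d0 in rng.uniform(0.8, 2.5, size=21):                # 21 synthetic "survival curves"
    surv = two_hit(doses, true_d0) + rng.normal(0, 0.01, doses.size)
    p_one.append(curve_fit(one_hit, doses, surv, p0=[1.0])[0][0])
    p_two.append(curve_fit(two_hit, doses, surv, p0=[1.0])[0][0])

print("correlation of homologous parameters:",
      np.corrcoef(p_one, p_two)[0, 1].round(3))
```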

  11. A model-based software development methodology for high-end automotive components

    NARCIS (Netherlands)

    Ravanan, Mahmoud

    2014-01-01

    This report provides a model-based software development methodology for high-end automotive components. The V-model is used as a process model throughout the development of the software platform. It offers a framework that simplifies the relation between requirements, design, implementation,

  12. Stability equation and two-component Eigenmode for domain walls in scalar potential model

    International Nuclear Information System (INIS)

    Dias, G.S.; Graca, E.L.; Rodrigues, R. de Lima

    2002-08-01

Supersymmetric quantum mechanics involving a two-component representation and two-component eigenfunctions is applied to obtain the stability equation associated with a potential model formulated in terms of two coupled real scalar fields. We investigate the question of stability by introducing an operator technique for the Bogomol'nyi-Prasad-Sommerfield (BPS) and non-BPS states on two domain walls in a scalar potential model with minimal N = 1 supersymmetry. (author)

  13. Seismic assessment and performance of nonstructural components affected by structural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Jieun; Althoff, Eric; Sezen, Halil; Denning, Richard; Aldemir, Tunc [Ohio State University, Columbus (United States)

    2017-03-15

    Seismic probabilistic risk assessment (SPRA) requires a large number of simulations to evaluate the seismic vulnerability of structural and nonstructural components in nuclear power plants. The effect of structural modeling and analysis assumptions on dynamic analysis of 3D and simplified 2D stick models of auxiliary buildings and the attached nonstructural components is investigated. Dynamic characteristics and seismic performance of building models are also evaluated, as well as the computational accuracy of the models. The presented results provide a better understanding of the dynamic behavior and seismic performance of auxiliary buildings. The results also help to quantify the impact of uncertainties associated with modeling and analysis of simplified numerical models of structural and nonstructural components subjected to seismic shaking on the predicted seismic failure probabilities of these systems.

  14. Enhanced hepatic insulin signaling in the livers of high altitude native rats under basal conditions and in the livers of low altitude native rats under insulin stimulation: a mechanistic study.

    Science.gov (United States)

    Al Dera, Hussain; Eleawa, Samy M; Al-Hashem, Fahaid H; Mahzari, Moeber M; Hoja, Ibrahim; Al Khateeb, Mahmoud

    2017-07-01

This study was designed to investigate the role of the liver in lowering fasting blood glucose levels (FBG) in rats native to high (HA) and low altitude (LA) areas. Compared with LA natives, HA native rats showed improved insulin and glucose tolerance and lower FBG, mediated at least in part by inhibition of hepatic gluconeogenesis and activation of glycogen synthesis. This effect reflects enhanced hepatic insulin signaling, in which decreased phosphorylation of TSC leads to inhibition of mTOR function. The effect was independent of AMPK activation and HIF1α stabilization, and was most probably due to oxidative stress-induced REDD1 expression. However, under insulin stimulation, and in spite of the lower mTOR activity in HA native rats, LA native rats had higher glycogen content, reduced levels of gluconeogenic enzymes, and more strongly enhanced insulin signaling, mainly due to higher levels of p-IRS1 (Tyr612).

  15. The n-component cubic model and flows: subgraph break-collapse method

    International Nuclear Information System (INIS)

    Essam, J.W.; Magalhaes, A.C.N. de.

    1988-01-01

We generalise to the n-component cubic model the subgraph break-collapse method which we previously developed for the Potts model. The relations used are based on expressions which we recently derived for the Z(λ) model in terms of mod-λ flows. Our recursive algorithm is similar, for n = 2, to the break-collapse method for the Z(4) model proposed by Mariz and coworkers. It allows the exact calculation of the partition function and correlation functions for n-component cubic clusters with n as a variable, without the need to examine all of the spin configurations. (author) [pt]

  16. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
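    For contrast with the multi-error model above, the traditional limacon evaluation it is compared against can be sketched in a few lines (synthetic profile, hypothetical eccentricity and form error): the first Fourier terms are fitted by least squares to remove eccentricity, and the residual peak-to-valley value is taken as the roundness error.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
# Synthetic measurement: 37 mm nominal radius, 5 um eccentricity, 2 um three-lobe form error.
r = (37.0 + 0.005 * np.cos(theta - 0.7) + 0.002 * np.cos(3 * theta)
     + rng.normal(0, 0.0002, theta.size))

A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(A, r, rcond=None)     # [R, a, b] of the limacon model
residual = r - A @ coef                          # roundness deviation after eccentricity removal

print(f"eccentricity ≈ {np.hypot(coef[1], coef[2]) * 1000:.1f} um, "
      f"roundness (peak-to-valley) ≈ {np.ptp(residual) * 1000:.1f} um")
```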

  17. Two component WIMP-FImP dark matter model with singlet fermion, scalar and pseudo scalar

    Energy Technology Data Exchange (ETDEWEB)

    Dutta Banik, Amit; Pandey, Madhurima; Majumdar, Debasish [Saha Institute of Nuclear Physics, HBNI, Astroparticle Physics and Cosmology Division, Kolkata (India); Biswas, Anirban [Harish Chandra Research Institute, Allahabad (India)

    2017-10-15

We explore a two component dark matter model with a fermion and a scalar. In this scenario the Standard Model (SM) is extended by a fermion, a scalar and an additional pseudo scalar. The fermionic component is assumed to have a global U(1)_DM symmetry and interacts with the pseudo scalar via Yukawa interaction while a Z_2 symmetry is imposed on the other component - the scalar. These ensure the stability of both dark matter components. Although the Lagrangian of the present model is CP conserving, the CP symmetry breaks spontaneously when the pseudo scalar acquires a vacuum expectation value (VEV). The scalar component of the dark matter in the present model also develops a VEV on spontaneous breaking of the Z_2 symmetry. Thus the various interactions of the dark sector and the SM sector occur through the mixing of the SM like Higgs boson, the pseudo scalar Higgs like boson and the singlet scalar boson. We show that the observed gamma ray excess from the Galactic Centre as well as the 3.55 keV X-ray line from Perseus, Andromeda etc. can be simultaneously explained in the present two component dark matter model and the dark matter self interaction is found to be an order of magnitude smaller than the upper limit estimated from the observational results. (orig.)

  19. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

Simpson, Daniel; Rue, Haavard; Riebler, Andrea; Martins, Thiago G.; Sørbye, Sigrunn H.

    2017-01-01

In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
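    A worked example of this construction for one common case, the PC prior on the precision of a Gaussian random effect: scaling follows from the user statement P(sigma > U) = alpha with sigma = tau^(-1/2), which yields an exponential prior on sigma and the closed-form density on tau evaluated below (U and alpha are illustrative choices).

```python
import numpy as np

def pc_prior_precision(tau, U=1.0, alpha=0.01):
    """Density of the PC prior on a precision parameter, scaled so that P(sigma > U) = alpha."""
    lam = -np.log(alpha) / U
    return 0.5 * lam * tau ** (-1.5) * np.exp(-lam / np.sqrt(tau))

tau = np.array([0.1, 1.0, 10.0, 100.0])
print("PC prior density at tau =", tau, ":", np.round(pc_prior_precision(tau), 4))

# Equivalent simulation check: sample sigma ~ Exponential(rate=lam), then set tau = 1/sigma^2.
rng = np.random.default_rng(4)
lam = -np.log(0.01) / 1.0
sigma = rng.exponential(scale=1.0 / lam, size=5)
print("sampled tau values:", np.round(1.0 / sigma**2, 3))
```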

  20. Use of 137Cs as a tracer of low-altitude transport of re-suspended aerosol from the African continent

    International Nuclear Information System (INIS)

    Hernandez, F.; Karlsson, L.; Alonso-Perez, S.; Rodriguez, S.; Lopez-Perez, M.; Cuevas, E.; Hernandez-Armas, J.

    2008-01-01

Northern Africa and the Arabian Peninsula are known sources of dust particles, which, under certain meteorological conditions, may be transported over long distances (e.g. Goudie and Middleton, 2001). The island of Tenerife is situated approximately 200 km off the coast of Morocco. It is often affected by atmospheric dust intrusions from the African Continent produced by the suspension of aerosol particulate. These atmospheric events produce important increments of PM10 matter concentrations in the air above the island. In this study, we have analysed the time series of 137Cs and PM10 matter recorded in Tenerife, during 2000-2006, at the marine boundary layer (MBL) to test the possible usefulness of the mentioned radiotracers as markers of dust intrusions of African origin. The analysis was supported with results obtained from the HYSPLIT (Hybrid Single-Particle Lagrangian Integrated Trajectory) 4.0 dispersion model and the DREAM (Dust Regional Atmospheric Model) model (author)

  1. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    Science.gov (United States)

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of hot-spot ignition, low-pressure slow burning and high-pressure fast reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term as well as its reaction rate is obtained through a "mixing rule" of the explosive components; new expressions for both the low-pressure slow burning term and the high-pressure fast reaction term are also obtained by establishing the relationships between the reaction rate of the multi-component PBX explosive and that of its explosive components, based on the low-pressure slow burning term and the high-pressure fast reaction term of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and the PBXC10 multi-component PBX explosives, and the numerical results of the pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. A comparative study of the proposed models for the components of the national health information system.

    Science.gov (United States)

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using the National Health Information System one can improve the quality of the health data, information and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system - needed for better planning and management of the factors influencing its performance - seems necessary, this study compares different perspectives on the components of this system. This is a descriptive, comparative study. The study material includes printed and electronic documents describing components of the national health information system in three parts: input, process and output. In this context, searches were conducted using library resources and the internet, and the data were analysed using comparative tables and qualitative methods. The findings showed that there are three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini's 2009 model. In the input part (resources and structure), all three models require components of management and leadership, planning and programme design, staffing, and software and hardware facilities and equipment. In the "process" section, all three models point to actions ensuring the quality of the health information system, and in the output section all but the Lippeveld model consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models have had a brief discussion about the

  3. A discrimination-association model for decomposing component processes of the implicit association test.

    Science.gov (United States)

    Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale

    2013-06-01

    A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
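
    As a rough, hypothetical illustration of the race idea described above (information accruing on Poisson counters until a termination criterion is reached), the sketch below simulates a two-accumulator race; the rates, criterion and the reduction to two accumulators are assumptions for illustration, not the paper's four-process parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_trials(rate_correct, rate_error, criterion, n_trials=10_000):
        """Schematic Poisson race: each accumulator needs `criterion` counts, and the
        time to collect k Poisson events at rate r is Gamma(k, 1/r); the faster one wins."""
        t_correct = rng.gamma(shape=criterion, scale=1.0 / rate_correct, size=n_trials)
        t_error = rng.gamma(shape=criterion, scale=1.0 / rate_error, size=n_trials)
        mean_rt = np.minimum(t_correct, t_error).mean()    # reaction time of the winner
        accuracy = np.mean(t_correct < t_error)            # proportion of correct responses
        return mean_rt, accuracy

    # A stronger automatic association is mimicked here by a higher accrual rate
    print(simulate_trials(rate_correct=8.0, rate_error=3.0, criterion=5))
    ```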

  4. OSCAR2000 : a multi-component 3-dimensional oil spill contingency and response model

    International Nuclear Information System (INIS)

    Reed, M.; Daling, P.S.; Brakstad, O.G.; Singsaas, I.; Faksness, L.-G.; Hetland, B.; Ekrol, N.

    2000-01-01

    Researchers at SINTEF in Norway have studied the weathering of surface oil. They developed a realistic model to analyze alternative spill response strategies. The model represented the formation and composition of the water-accommodated fraction (WAF) of oil for both treated and untreated oil spills. As many as 25 components, pseudo-components, or metabolites were allowed in the specification of the oil. Calculations performed with OSCAR have been verified in detail on numerous occasions. The model made it possible to determine quite realistically the dissolution, transformation, and toxicology of dispersed oil clouds, as well as evaporation, emulsification, and natural dispersion. OSCAR comprised a data-based oil weathering model, a three-dimensional oil trajectory and chemical fates model, an oil spill combat model, and exposure models for birds, marine mammals, fish and ichthyoplankton. 17 refs., 1 tab., 11 figs

  5. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composite). This work focuses on developing an automatic image processing methodology, based on Non Destructive Testing (NDT) data and numerical models, for predicting the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damage. Then, a parametric Finite Element (FE) model of the damaged component is developed based on the inputs acquired from the image processing algorithm. The analysis of the metallic structures employs the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...
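
    A minimal sketch of the thresholding step described above, assuming a synthetic NDT amplitude map and an invented pixel size; it only illustrates how a damage mask could be quantified before parameterizing an FE model, and is not the authors' algorithm.

    ```python
    import numpy as np

    def quantify_damage(ndt_image, threshold, pixel_size_mm=0.5):
        """Binarize an NDT amplitude map and return simple geometric characteristics
        of the damaged region (area and bounding box) for a parametric FE model."""
        mask = ndt_image >= threshold                       # damaged pixels
        area_mm2 = mask.sum() * pixel_size_mm ** 2
        if not mask.any():
            return area_mm2, None
        rows, cols = np.nonzero(mask)
        bbox = (rows.min(), rows.max(), cols.min(), cols.max())
        return area_mm2, bbox

    # Synthetic example: a 100x100 C-scan with an elliptical "delamination"
    y, x = np.mgrid[0:100, 0:100]
    scan = 0.1 * np.random.default_rng(1).random((100, 100))
    scan[((x - 60) / 15) ** 2 + ((y - 40) / 8) ** 2 < 1.0] = 0.9
    print(quantify_damage(scan, threshold=0.5))
    ```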

  6. Model of the fine-grain component of martian soil based on Viking lander data

    International Nuclear Information System (INIS)

    Nussinov, M.D.; Chernyak, Y.B.; Ettinger, J.L.

    1978-01-01

    A model of the fine-grain component of the Martian soil is proposed. The model is based on well-known physical phenomena, and enables an explanation of the evolution of the gases released in the GEX (gas exchange experiments) and GCMS (gas chromatography-mass spectrometer experiments) of the Viking landers. (author)

  7. Individual differences in anxiety responses to stressful situations : A three-mode component analysis model

    NARCIS (Netherlands)

    Van Mechelen, Iven; Kiers, Henk A.L.

    1999-01-01

    The three-mode component analysis model is discussed as a tool for a contextualized study of personality. When applied to person x situation x response data, the model includes sets of latent dimensions for persons, situations, and responses as well as a so-called core array, which may be considered

  8. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    Science.gov (United States)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in this process are investigated. Considering the limitations of a single theoretical model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and accommodate the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has high accuracy and a clear superiority over a single model. Together with its compatible interface to the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  9. MODELING THERMAL DUST EMISSION WITH TWO COMPONENTS: APPLICATION TO THE PLANCK HIGH FREQUENCY INSTRUMENT MAPS

    International Nuclear Information System (INIS)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2015-01-01

    We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.1 arcmin resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.1 arcmin FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales.
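
    For readers who want to see the parameterization, a two-modified-blackbody spectrum is sketched below; the optical depths, temperatures and emissivity indices are illustrative placeholders, not the fitted Planck/DIRBE values.

    ```python
    import numpy as np

    H, K, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

    def planck(nu, T):
        """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
        return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

    def two_mbb(nu, tau1, T1, beta1, tau2, T2, beta2, nu0=545e9):
        """Sum of two modified blackbodies: tau_i * (nu/nu0)**beta_i * B_nu(T_i)."""
        return (tau1 * (nu / nu0) ** beta1 * planck(nu, T1) +
                tau2 * (nu / nu0) ** beta2 * planck(nu, T2))

    # Illustrative (not fitted) parameters: a cold and a warm dust component
    nu = np.array([100e9, 217e9, 545e9, 857e9, 3000e9])
    print(two_mbb(nu, tau1=1e-4, T1=9.6, beta1=1.6, tau2=1e-5, T2=16.4, beta2=2.7))
    ```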

  10. A new model for the redundancy allocation problem with component mixing and mixed redundancy strategy

    International Nuclear Information System (INIS)

    Gholinezhad, Hadi; Zeinal Hamadani, Ali

    2017-01-01

    This paper develops a new model for redundancy allocation problem. In this paper, like many recent papers, the choice of the redundancy strategy is considered as a decision variable. But, in our model each subsystem can exploit both active and cold-standby strategies simultaneously. Moreover, the model allows for component mixing such that components of different types may be used in each subsystem. The problem, therefore, boils down to determining the types of components, redundancy levels, and number of active and cold-standby units of each type for each subsystem to maximize system reliability by considering such constraints as available budget, weight, and space. Since RAP belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed for solving the problem. Finally, the performance of the proposed algorithm is evaluated by applying it to a well-known test problem from the literature with relatively satisfactory results. - Highlights: • A new model for the redundancy allocation problem in series–parallel systems is proposed. • The redundancy strategy of each subsystem is considered as a decision variable and can be active, cold-standby or mixed. • Component mixing is allowed, in other words components of any subsystem can be non-identical. • A genetic algorithm is developed for solving the problem. • Computational experiments demonstrate that the new model leads to interesting results.
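
    As a hedged illustration of how a candidate solution might be scored in such a problem, the snippet below evaluates the reliability of a series system of actively redundant subsystems with mixed component types; the cold-standby and mixed-strategy cases treated in the paper need a time-dependent formulation and are not reproduced here.

    ```python
    from math import prod

    def subsystem_reliability(component_rels):
        """Active redundancy: the subsystem works if at least one of its (possibly
        mixed-type) components works; component_rels lists individual reliabilities."""
        return 1.0 - prod(1.0 - r for r in component_rels)

    def system_reliability(design):
        """Series system of redundant subsystems; `design` holds one list of chosen
        component reliabilities per subsystem (component mixing allowed)."""
        return prod(subsystem_reliability(sub) for sub in design)

    # Example candidate a GA chromosome might encode: 3 subsystems with mixed types
    design = [[0.90, 0.85], [0.95, 0.95, 0.80], [0.99]]
    print(round(system_reliability(design), 4))
    ```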

  11. Assessing Internet addiction using the parsimonious Internet addiction components model - a preliminary study [forthcoming]

    OpenAIRE

    Kuss, DJ; Shorter, GW; Van Rooij, AJ; Griffiths, MD; Schoenmakers, T

    2014-01-01

    Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied regarding populations studied and instruments used, making reliable prevalence estimations difficult. To overcome the present problems a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths’ addiction components (2005), i...

  12. 3-D inelastic analysis methods for hot section components. Volume 2: Advanced special functions models

    Science.gov (United States)

    Wilson, R. B.; Banerjee, P. K.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.

  13. Proportional and scale change models to project failures of mechanical components with applications to space station

    Science.gov (United States)

    Taneja, Vidya S.

    1996-01-01

    In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVA's associated with PROX OPS (ISSA & MIR), and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of the thruster valves by treating them as a censored repairable system. The model for each valve takes the form of a nonhomogeneous process with an intensity function that is treated either as a proportional hazards model or as a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For the various models, methods for estimating the model parameters from censored data will be developed. Available field data (from previously flown flights) are from non-renewable systems. The estimated failure rate based on such data will need to be modified for renewable systems such as the thruster valves.
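
    The sketch below illustrates the proportional-intensity idea with an unobserved frailty z: an assumed power-law (Crow/AMSAA-type) baseline intensity, not necessarily the paper's, scaled by z and simulated by thinning.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_nhpp(z, beta=1.5, eta=1000.0, t_end=5000.0):
        """Failure times of one valve: intensity z * (beta/eta) * (t/eta)**(beta-1),
        simulated by Ogata thinning (the intensity is monotone for beta > 1)."""
        lam_max = z * (beta / eta) * (t_end / eta) ** (beta - 1)
        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > t_end:
                return np.array(times)
            lam_t = z * (beta / eta) * (t / eta) ** (beta - 1)
            if rng.random() < lam_t / lam_max:
                times.append(t)

    # Heterogeneity across valves: frailties z drawn from a gamma distribution (mean 1)
    for z in rng.gamma(shape=4.0, scale=0.25, size=3):
        print(f"z = {z:.2f}, number of failures = {len(simulate_nhpp(z))}")
    ```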

  14. Layout Optimization Model for the Production Planning of Precast Concrete Building Components

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2018-05-01

    Full Text Available Precast concrete comprises the basic components of modular buildings. The efficiency of precast concrete building component production directly impacts construction time and cost. In the precast component production process, mold setting has a significant influence on production efficiency and cost, as well as on resource consumption. However, the development of mold setting plans is left to the experience of production staff, with outcomes dependent on the quality of human skill and experience available. This can result in sub-optimal production efficiencies and resource wastage. Accordingly, in order to improve the efficiency of precast component production, this paper proposes an optimization model that maximizes the average utilization rate of the pallets used during the molding process. The constraints considered were the order demand, the size of the pallet, layout methods, and the positional relationship of components. A heuristic algorithm was used to identify optimization solutions provided by the model. Through empirical analysis, and as exemplified in the case study, this research is significant in offering a prefabrication production planning model which improves pallet utilization rates, shortens component production time, reduces production costs, and improves resource utilization. The results clearly demonstrate that the proposed method can facilitate precast production planning, providing strong practical implications for production planners.

  15. Characterizing and Modeling the Cost of Rework in a Library of Reusable Software Components

    Science.gov (United States)

    Basili, Victor R.; Condon, Steven E.; ElEmam, Khaled; Hendrick, Robert B.; Melo, Walcelio

    1997-01-01

    In this paper we characterize and model the cost of rework in a Component Factory (CF) organization. A CF is responsible for developing and packaging reusable software components. Data was collected on corrective maintenance activities for the Generalized Support Software reuse asset library located at the Flight Dynamics Division of NASA's GSFC. We then constructed a predictive model of the cost of rework using the C4.5 system for generating a logical classification model. The predictor variables for the model are measures of internal software product attributes. The model demonstrates good prediction accuracy, and can be used by managers to allocate resources for corrective maintenance activities. Furthermore, we used the model to generate proscriptive coding guidelines to improve programming practices so that the cost of rework can be reduced in the future. The general approach we have used is applicable to other environments.
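
    C4.5 itself is not readily available in common Python libraries, so the hypothetical sketch below uses an entropy-based decision tree from scikit-learn as a stand-in, with invented predictor names and data, to show the flavor of a logical classification model for rework cost.

    ```python
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented component metrics and labels (high vs. low rework cost)
    df = pd.DataFrame({
        "sloc":        [120, 800, 450, 60, 950, 300, 700, 150],
        "cyclomatic":  [5, 42, 18, 3, 55, 12, 35, 8],
        "n_modules":   [2, 14, 7, 1, 16, 5, 11, 3],
        "high_rework": [0, 1, 1, 0, 1, 0, 1, 0],
    })
    X, y = df.drop(columns="high_rework"), df["high_rework"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
    tree.fit(X_tr, y_tr)
    print(export_text(tree, feature_names=list(X.columns)))   # readable rules
    print("holdout accuracy:", tree.score(X_te, y_te))
    ```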

  16. SASSYS-1 balance-of-plant component models for an integrated plant response

    International Nuclear Information System (INIS)

    Ku, J.-Y.

    1989-01-01

    Models of power plant heat transfer components and rotating machinery have been added to the balance-of-plant model in the SASSYS-1 liquid metal reactor systems analysis code. This work is part of a continuing effort in plant network simulation based on the general mathematical models developed. The models described in this paper extend the scope of the balance-of-plant model to handle non-adiabatic conditions along flow paths. While the mass and momentum equations remain the same, the energy equation now contains a heat source term due to energy transfer across the flow boundary or to work done through a shaft. The heat source term is treated fully explicitly. In addition, the equation of state is rewritten in terms of the quality and separate parameters for each phase. The models are simple enough to run quickly, yet include sufficient detail of dominant plant component characteristics to provide accurate results. 5 refs., 16 figs., 2 tabs

  17. Low-level profiling and MARTE-compatible modeling of software components for real-time systems

    NARCIS (Netherlands)

    Triantafyllidis, K.; Bondarev, E.; With, de P.H.N.

    2012-01-01

    In this paper, we present a method for (a) profiling of individual components at high accuracy level, (b) modeling of the components with the accurate data obtained from profiling, and (c) model conversion to the MARTE profile. The resulting performance models of individual components are used at

  18. Development of the interactive model between Component Cooling Water System and Containment Cooling System using GOTHIC

    International Nuclear Information System (INIS)

    Byun, Choong Sup; Song, Dong Soo; Jun, Hwang Yong

    2006-01-01

    From a design point of view, the component cooling water (CCW) system is not designed fully interactively with its heat loads. Heat loads are calculated from the CCW design flow and temperature conditions, which are determined conservatively. The CCW heat exchanger is then sized using the total maximized heat loads from this calculation. This approach does not give optimized performance results or the exact trends of the CCW system and its loads during transients. Therefore, a combined model for performance analysis of the containment and the component cooling water (CCW) system is developed using the GOTHIC software code. The model is verified using the design parameters of the component cooling water heat exchanger and the heat loads during the recirculation mode of a loss-of-coolant accident scenario. This model may be used for calculating the realistic containment response and CCW performance, and for increasing the ultimate heat sink temperature limits.

  19. A Component-Based Modeling and Validation Method for PLC Systems

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2014-05-01

    Full Text Available Programmable logic controllers (PLCs) are complex embedded systems that are widely used in industry. This paper presents a component-based modeling and validation method for PLC systems using the behavior-interaction-priority (BIP) framework. We designed a general system architecture and a component library for a type of device control system. The control software and the hardware of the environment were all modeled as BIP components. System requirements were formalized as monitors. Simulation was carried out to validate the system model. A realistic industrial example, a gates control system, was employed to illustrate our strategies. We found a couple of design errors during the simulation, which helped us to improve the dependability of the original systems. The experimental results demonstrated the effectiveness of our approach.

  20. COMPONENT SUPPLY MODEL FOR REPAIR ACTIVITIES NETWORK UNDER CONDITIONS OF PROBABILISTIC INDEFINITENESS.

    Directory of Open Access Journals (Sweden)

    Victor Yurievich Stroganov

    2017-02-01

    Full Text Available This article systematizes the major production functions of a repair activities network and lists the planning and control functions, which are described in the form of business processes (BP). A simulation model for analyzing the effectiveness of component delivery under conditions of probabilistic uncertainty is proposed. It is shown that a significant portion of the total number of business processes is devoted to the management and planning of the movement of parts and components. The construction of experimental design techniques for the simulation model under non-stationary conditions is also considered.

  1. Towards a Complete Model for Software Component Deployment on Heterogeneous Platform

    Directory of Open Access Journals (Sweden)

    Švogor Ivan

    2014-12-01

    Full Text Available This report briefly describes ongoing research on optimizing the allocation of software components to a heterogeneous computing platform (which includes CPU, GPU and FPGA). The research goal is also presented, along with current hot topics of the research area, related research teams, and finally the results and contribution of my research. The work involves mathematical modelling that results in a goal function, an optimization method that finds a suboptimal solution to the goal function, and a software modeling tool that enables graphical representation of the problem at hand and helps developers determine component placement in the system design phase.

  2. Optics Elements for Modeling Electrostatic Lenses and Accelerator Components: III. Electrostatic Deflectors

    International Nuclear Information System (INIS)

    Brown, T.A.; Gillespie, G.H.

    1999-01-01

    Ion-beam optics models for simulating electrostatic prisms (deflectors) of different geometries have been developed for the computer code TRACE 3-D. TRACE 3-D is an envelope (matrix) code, which includes a linear space charge model, that was originally developed to model bunched beams in magnetic transport systems and radiofrequency (RF) accelerators. Several new optical models for a number of electrostatic lenses and accelerator columns have been developed recently that allow the code to be used for modeling beamlines and accelerators with electrostatic components. The new models include a number of options for: (1) Einzel lenses, (2) accelerator columns, (3) electrostatic prisms, and (4) electrostatic quadrupoles. A prescription for setting up the initial beam appropriate to modeling 2-D (continuous) beams has also been developed. The models for electrostatic prisms are described in this paper. The electrostatic prism model options allow the modeling of cylindrical, spherical, and toroidal electrostatic deflectors. The application of these models in the development of ion-beam transport systems is illustrated through the modeling of a spherical electrostatic analyzer as a component of the new low energy beamline at CAMS

  3. Pheno-Copter: A Low-Altitude, Autonomous Remote-Sensing Robotic Helicopter for High-Throughput Field-Based Phenotyping

    Directory of Open Access Journals (Sweden)

    Scott C. Chapman

    2014-06-01

    Full Text Available Plant breeding trials are extensive (100s to 1000s of plots) and are difficult and expensive to monitor by conventional means, especially where measurements are time-sensitive. For example, in a land-based measure of canopy temperature (hand-held infrared thermometer) at two to 10 plots per minute, the atmospheric conditions may change greatly during the time of measurement. Such sensors measure small spot samples (2 to 50 cm2), whereas image-based methods allow the sampling of entire plots (2 to 30 m2). A higher aerial position allows the rapid measurement of large numbers of plots if the altitude is low (10 to 40 m) and the flight control is sufficiently precise to collect high-resolution images. This paper outlines the implementation of a customized robotic helicopter (gas-powered, 1.78-m rotor diameter) with autonomous flight control and software to plan flights over experiments that were 0.5 to 3 ha in area and, then, to extract, straighten and characterize multiple experimental field plots from images taken by three cameras. With a capacity to carry 1.5 kg for 30 min or 1.1 kg for 60 min, the system successfully completed >150 flights for a total duration of 40 h. Example applications presented here are estimations of the variation in: ground cover in sorghum (early season); canopy temperature in sugarcane (mid-season); and three-dimensional measures of crop lodging in wheat (late season). Together with this hardware platform, improved software to automate the production of ortho-mosaics and digital elevation models and to extract plot data would further benefit the development of high-throughput field-based phenotyping systems.

  4. Component Degradation Susceptibilities As The Bases For Modeling Reactor Aging Risk

    International Nuclear Information System (INIS)

    Unwin, Stephen D.; Lowry, Peter P.; Toyooka, Michael Y.

    2010-01-01

    The extension of nuclear power plant operating licenses beyond 60 years in the United States will be necessary if we are to meet national energy needs while addressing the issues of carbon and climate. Characterizing the operating risks associated with aging reactors is problematic because the principal tool for risk-informed decision-making, Probabilistic Risk Assessment (PRA), is not ideally suited to addressing aging systems. The components most likely to drive risk in an aging reactor - the passives - receive limited treatment in PRA, and furthermore, standard PRA methods are based on the assumption of stationary failure rates: a condition unlikely to be met in an aging system. A critical barrier to modeling passives aging on the wide scale required for a PRA is that there is seldom sufficient field data to populate parametric failure models, nor are practical physics models available to predict out-year component reliability. The methodology described here circumvents some of these data and modeling needs by using materials degradation metrics, integrated with conventional PRA models, to produce risk importance measures for specific aging mechanisms and component types. We suggest that these measures have multiple applications, from the risk-screening of components to the prioritization of materials research.

  5. Common and Critical Components Among Community Health Assessment and Community Health Improvement Planning Models.

    Science.gov (United States)

    Pennel, Cara L; Burdine, James N; Prochaska, John D; McLeroy, Kenneth R

    Community health assessment and community health improvement planning are continuous, systematic processes for assessing and addressing health needs in a community. Since there are different models to guide assessment and planning, as well as a variety of organizations and agencies that carry out these activities, there may be confusion in choosing among approaches. By examining the various components of the different assessment and planning models, we are able to identify areas for coordination, ways to maximize collaboration, and strategies to further improve community health. We identified 11 common assessment and planning components across 18 models and requirements, with a particular focus on health department, health system, and hospital models and requirements. These common components included preplanning; developing partnerships; developing vision and scope; collecting, analyzing, and interpreting data; identifying community assets; identifying priorities; developing and implementing an intervention plan; developing and implementing an evaluation plan; communicating and receiving feedback on the assessment findings and/or the plan; planning for sustainability; and celebrating success. Within several of these components, we discuss characteristics that are critical to improving community health. Practice implications include better understanding of different models and requirements by health departments, hospitals, and others involved in assessment and planning to improve cross-sector collaboration, collective impact, and community health. In addition, federal and state policy and accreditation requirements may be revised or implemented to better facilitate assessment and planning collaboration between health departments, hospitals, and others for the purpose of improving community health.

  6. A new model for predicting thermodynamic properties of ternary metallic solution from binary components

    International Nuclear Information System (INIS)

    Fang Zheng; Zhang Quanru

    2006-01-01

    A model has been derived to predict the thermodynamic properties of a ternary metallic system from those of its three binaries. In the model, the excess Gibbs free energies and the interaction parameter ω_123 for the three components of a ternary are expressed as a simple sum of those of the three sub-binaries, and the mole fractions of the components of the ternary are identical with those of the sub-binaries. This model is greatly simplified compared with the current symmetrical and asymmetrical models. It is able to overcome some shortcomings of the current models, such as the arrangement of the components in the Gibbs triangle, the conversion of mole fractions between the ternary and the corresponding binaries, and some necessary processes for optimizing the various parameters of these models. Two ternary systems, Mg-Cu-Ni and Cd-Bi-Pb, are recalculated to demonstrate the validity and precision of the present model. The calculated results on the Mg-Cu-Ni system are better than those in the literature. New parameters in the Margules equations expressing the excess Gibbs free energies of the three binary systems of the Cd-Bi-Pb ternary system are also given
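
    A minimal sketch of the additive idea is given below, assuming simple one-parameter (regular-solution) binary terms and invented interaction parameters; the Margules expressions and the ternary parameter ω_123 of the paper are not reproduced.

    ```python
    import numpy as np

    def g_excess_ternary(x, omega):
        """Excess Gibbs energy of a ternary as a plain sum of its three binary
        contributions omega_ij * x_i * x_j, evaluated at the ternary mole fractions."""
        pairs = [(0, 1), (0, 2), (1, 2)]
        return sum(omega[i, j] * x[i] * x[j] for i, j in pairs)

    omega = np.array([[0.0, -5200.0,  1800.0],
                      [0.0,     0.0, -3100.0],
                      [0.0,     0.0,     0.0]])   # illustrative parameters, J/mol
    print(g_excess_ternary((0.2, 0.5, 0.3), omega))   # J/mol
    ```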

  7. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    Energy Technology Data Exchange (ETDEWEB)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.

  8. Verification of the component accuracy prediction obtained by physical modelling and the elastic simulation of the die/component interaction

    DEFF Research Database (Denmark)

    Ravn, Bjarne Gottlieb; Andersen, Claus Bo; Wanheim, Tarras

    2001-01-01

    There are three demands on a component that must undergo a die-cavity elasticity analysis. The demands on the product are specified as: (i) to be able to measure the loading profile which results in elastic die-cavity deflections; (ii) to be able to compute the elastic deflections using FE; (iii...

  9. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

    configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended to reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between ... was applied for the estimation of the system failure functions. It is desired to compare the results with the true system failure function, which is possible to estimate using simulation techniques. Theoretical model development should be pursued in further research. One of the directions might be modeling the system based on Sequential Order Statistics, by considering the failure of the minimum (weakest component) at each loading level. The proposed idea of representing the system by independent components could also be used for modeling reliability by Sequential Order Statistics.

  10. Refinement and verification in component-based model-driven design

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Liu, Zhiming; Ravn, Anders Peter

    2009-01-01

    Modern software development is complex as it has to deal with many different and yet related aspects of applications. In practical software engineering this is now handled by a UML-like modelling approach in which different aspects are modelled by different notations. Component-based and object-oriented ... be integrated in computer-aided software engineering (CASE) tools for adding formally supported checking, transformation and generation facilities.

  11. Statistical intercomparison of global climate models: A common principal component approach with application to GCM data

    International Nuclear Information System (INIS)

    Sengupta, S.K.; Boyle, J.S.

    1993-05-01

    Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution
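
    The snippet below is not the common-principal-component estimator itself (which involves Flury-type joint diagonalization); it is a simplified, assumed stand-in that extracts the leading spatial components of two synthetic fields and quantifies their similarity with principal angles.

    ```python
    import numpy as np
    from scipy.linalg import subspace_angles

    rng = np.random.default_rng(3)

    def leading_eofs(field, k=3):
        """field: (time, space) anomalies; return the k leading spatial patterns as columns."""
        anomalies = field - field.mean(axis=0)
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        return vt[:k].T

    # Two synthetic "model" fields sharing part of their spatial structure
    base = rng.standard_normal((120, 200))
    model_a = base + 0.3 * rng.standard_normal((120, 200))
    model_b = 0.8 * base + 0.6 * rng.standard_normal((120, 200))

    angles = subspace_angles(leading_eofs(model_a), leading_eofs(model_b))
    print(np.degrees(angles))   # small angles => similar leading spatial structures
    ```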

  12. Photonic Beamformer Model Based on Analog Fiber-Optic Links’ Components

    International Nuclear Information System (INIS)

    Volkov, V A; Gordeev, D A; Ivanov, S I; Lavrov, A P; Saenko, I I

    2016-01-01

    The model of photonic beamformer for wideband microwave phased array antenna is investigated. The main features of the photonic beamformer model based on true-time-delay technique, DWDM technology and fiber chromatic dispersion are briefly analyzed. The performance characteristics of the key components of photonic beamformer for phased array antenna in the receive mode are examined. The beamformer model composed of the components available on the market of fiber-optic analog communication links is designed and tentatively investigated. Experimental demonstration of the designed model beamforming features includes actual measurement of 5-element microwave linear array antenna far-field patterns in 6-16 GHz frequency range for antenna pattern steering up to 40°. The results of experimental testing show good accordance with the calculation estimates. (paper)

  13. Optics elements for modeling electrostatic lenses and accelerator components: III. Electrostatic deflectors

    International Nuclear Information System (INIS)

    Brown, T.A.; Gillespie, G.H.

    2000-01-01

    Ion-beam optics models for simulating electrostatic prisms (deflectors) of different geometries have been developed for the envelope (matrix) computer code TRACE 3-D as a part of the development of a suite of electrostatic beamline element models which includes lenses, acceleration columns, quadrupoles and prisms. The models for electrostatic prisms are described in this paper. The electrostatic prism model options allow the first-order modeling of cylindrical, spherical and toroidal electrostatic deflectors. The application of these models in the development of ion-beam transport systems is illustrated through the modeling of a spherical electrostatic analyzer as a component of the new low-energy beamline at the Center for Accelerator Mass Spectrometry. Although initial tests following installation of the new beamline showed that the new spherical electrostatic analyzer was not behaving as predicted by these first-order models, operational conditions were found under which the analyzer now works properly as a double-focusing spherical electrostatic prism

  14. A review of multi-component maintenance models with economic dependence

    NARCIS (Netherlands)

    R. Dekker (Rommert); R.E. Wildeman (Ralph); F.A. van der Duyn Schouten (Frank)

    1997-01-01

    In this paper we review the literature on multi-component maintenance models with economic dependence. The emphasis is on papers that appeared after 1991, but there is an overlap with Section 2 of the most recent review paper by Cho and Parlar (1991). We distinguish between stationary

  15. Specification and Generation of Environment for Model Checking of Software Components

    Czech Academy of Sciences Publication Activity Database

    Pařízek, P.; Plášil, František

    2007-01-01

    Roč. 176, - (2007), s. 143-154 ISSN 1571-0661 R&D Projects: GA AV ČR 1ET400300504 Institutional research plan: CEZ:AV0Z10300504 Keywords : software components * behavior protocols * model checking * automated generation of environment Subject RIV: JC - Computer Hardware ; Software

  16. Helpful Components Involved in the Cognitive-Experiential Model of Dream Work

    Science.gov (United States)

    Tien, Hsiu-Lan Shelley; Chen, Shuh-Chi; Lin, Chia-Huei

    2009-01-01

    The purpose of the study was to examine the helpful components involved in the Hill's cognitive-experiential dream work model. Participants were 27 volunteer clients from colleges and universities in northern and central parts of Taiwan. Each of the clients received 1-2 sessions of dream interpretations. The cognitive-experiential dream work model…

  17. A Bayesian analysis of the PPP puzzle using an unobserved components model

    NARCIS (Netherlands)

    R.H. Kleijn (Richard); H.K. van Dijk (Herman)

    2001-01-01

    The failure to describe the time series behaviour of most real exchange rates as temporary deviations from fixed long-term means may be due to time variation of the equilibria themselves, see Engel (2000). We implement this idea using an unobserved components model and decompose the
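
    As a rough sketch of the decomposition idea (a time-varying equilibrium plus a mean-reverting deviation), the snippet below fits a maximum-likelihood local-level-plus-AR(1) unobserved components model with statsmodels to synthetic data; the Bayesian estimation of the paper is not reproduced.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)

    # Synthetic (log) real exchange rate: a random-walk "equilibrium" plus an AR(1)
    # deviation that decays back towards it
    n = 300
    equilibrium = np.cumsum(0.002 * rng.standard_normal(n))
    deviation = np.zeros(n)
    for t in range(1, n):
        deviation[t] = 0.9 * deviation[t - 1] + 0.01 * rng.standard_normal()
    y = equilibrium + deviation

    model = sm.tsa.UnobservedComponents(y, level="local level", autoregressive=1)
    res = model.fit(disp=False)
    print(res.summary().tables[1])            # estimated component variances
    trend = res.level["smoothed"]             # smoothed time-varying "equilibrium"
    ```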

  18. Passively mode-locked Nd:YAG laser with a GaAs component

    International Nuclear Information System (INIS)

    Zhang Zhuhong; Qian Liejia; Chen Shaohe; Fan Dianyuan; Mao Hongwei

    1992-01-01

    An all-solid-state passively mode-locked Nd:YAG laser with a 400 μm, (100)-oriented GaAs component is reported for the first time, and mode-locked pulses with a duration of 16 ps and an average energy of 10 μJ were obtained with a probability of 90%.

  19. Thermodynamically consistent modeling and simulation of multi-component two-phase flow with partial miscibility

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu

    2017-01-01

    A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is an attractive
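
    For reference, a minimal implementation of the Peng-Robinson equation of state mentioned above is sketched here; the methane-like critical constants are approximate illustrations, and the diffuse-interface flow model itself is not reproduced.

    ```python
    import numpy as np

    R = 8.314  # J/(mol K)

    def peng_robinson_pressure(T, v, Tc, Pc, omega):
        """Peng-Robinson EOS: P = RT/(v-b) - a*alpha/(v**2 + 2bv - b**2), molar volume v."""
        a = 0.45724 * R**2 * Tc**2 / Pc
        b = 0.07780 * R * Tc / Pc
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
        return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)

    # Approximate methane parameters: Tc = 190.6 K, Pc = 4.599e6 Pa, omega = 0.011
    print(peng_robinson_pressure(T=300.0, v=1e-3, Tc=190.6, Pc=4.599e6, omega=0.011))
    ```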

  20. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    Theoretical framework. In the present work, the dark matter candidate has two components S and S′ both of ... The scalar sector potential (for Higgs and two real singlet scalars) in this framework can then be written .... In this work we obtain the allowed values of model parameters (δ2, δ′2, MS and M′S) using three direct ...

  1. Ontologies to Support RFID-Based Link between Virtual Models and Construction Components

    DEFF Research Database (Denmark)

    Sørensen, Kristian Birch; Christiansson, Per; Svidt, Kjeld

    2010-01-01

    the virtual models and the physical components in the construction process can improve the information handling and sharing in construction and building operation management. Such a link can be created by means of Radio Frequency Identification (RFID) technology. Ontologies play an important role...

  2. Correlation inequalities for two-component hypercubic φ4 models. Pt. 2

    International Nuclear Information System (INIS)

    Soria, J.L.; Instituto Tecnologico de Tijuana

    1990-01-01

    We continue the program started in the first paper (J. Stat. Phys. 52 (1988) 711-726). We find new and already known correlation inequalities for a family of two-component hypercubic φ⁴ models, using techniques of rotated correlation inequalities and random walk representation. (orig.)

  3. A model for determining condition-based maintenance policies for deteriorating multi-component systems

    NARCIS (Netherlands)

    Hontelez, J.A.M.; Wijnmalen, D.J.D.

    1993-01-01

    We discuss a method to determine strategies for preventive maintenance of systems consisting of gradually deteriorating components. A model has been developed to compute not only the range of conditions inducing a repair action, but also inspection moments based on the last known condition value so

  4. Quantifying functional connectivity in multi-subject fMRI data using component models

    DEFF Research Database (Denmark)

    Madsen, Kristoffer Hougaard; Churchill, Nathan William; Mørup, Morten

    2017-01-01

    of functional connectivity, evaluated on both simulated and experimental resting-state fMRI data. It was demonstrated that highly flexible subject-specific component subspaces, as well as very constrained average models, are poor predictors of whole-brain functional connectivity, whereas the best...

  5. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with leverage effects or with Student-t distributed errors. We also model changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where

  6. Around power law for PageRank components in Buckley-Osthus model of web graph

    OpenAIRE

    Gasnikov, Alexander; Zhukovskii, Maxim; Kim, Sergey; Noskov, Fedor; Plaunov, Stepan; Smirnov, Daniil

    2017-01-01

    In this paper we investigate the power law for PageRank components in the Buckley-Osthus model of the web graph. We compare different numerical methods for PageRank calculation. Using the best of these methods, we carry out extensive numerical experiments. These experiments confirm the power-law hypothesis. Finally, we discuss a real web-ranking model based on the classical PageRank approach.
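
    A hedged illustration of the kind of experiment described: networkx has no Buckley-Osthus generator, so the sketch below uses a Barabasi-Albert preferential-attachment graph as a stand-in and checks for a heavy-tailed PageRank distribution with a crude log-log fit.

    ```python
    import networkx as nx
    import numpy as np

    g = nx.barabasi_albert_graph(n=20_000, m=3, seed=0)
    pr = np.array(sorted(nx.pagerank(g, alpha=0.85).values(), reverse=True))

    ranks = np.arange(1, len(pr) + 1)
    tail = slice(10, 2000)                       # fit only the upper tail of the rank plot
    slope, _ = np.polyfit(np.log(ranks[tail]), np.log(pr[tail]), 1)
    print(f"rank-plot slope ~ {slope:.2f} (roughly constant slope suggests a power law)")
    ```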

  7. The use of error components models in business finance. : a review article and an application

    OpenAIRE

    Καραθανάσης, Γεώργιος Α.; Φίλιππας, Νικόλαος

    1993-01-01

    This study applies and tests several stock valuation models of companies whose shares are traded in the Athens Stock Exchange. The relevant equations are estimated for the five major sectors of the Athens Stock Exchange (Banks, Textiles, Foods, Buildings, Commercials) using a specification which combines cross sectional and time series data. This is the Error Components Model. In view of the results obtained the most important variables across sectors appear to be dividends fol...

  8. Component simulation in problems of calculated model formation of automatic machine mechanisms

    OpenAIRE

    Telegin Igor; Kozlov Alexander; Zhirkov Alexander

    2017-01-01

    The paper deals with the problems of the component simulation method application in the problems of the automation of the mechanical system model formation with the further possibility of their CAD-realization. The purpose of the investigations mentioned consists in the automation of the CAD-model formation of high-speed mechanisms in automatic machines and in the analysis of dynamic processes occurred in their units taking into account their elasto-inertial properties, power dissipation, gap...

  9. Modelling the Load Curve of Aggregate Electricity Consumption Using Principal Components

    OpenAIRE

    Matteo Manera; Angelo Marzullo

    2003-01-01

    Since oil is a non-renewable resource with a high environmental impact, and its most common use is to produce combustibles for electricity, reliable methods for modelling electricity consumption can contribute to a more rational employment of this hydrocarbon fuel. In this paper we apply the Principal Components (PC) method to modelling the load curves of Italy, France and Greece on hourly data of aggregate electricity consumption. The empirical results obtained with the PC approach are compa...
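
    A small sketch of the idea, assuming a synthetic matrix of daily 24-hour load curves: principal components compress the curves into a few shape factors from which the load can be reconstructed. The data and dimensions are invented for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)

    # 365 daily load curves of 24 hourly values: base shape + random evening peak + noise
    hours = np.arange(24)
    base = 50 + 20 * np.sin((hours - 6) * np.pi / 12)
    evening_peak = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
    loads = (base
             + rng.normal(0, 10, size=(365, 1)) * evening_peak
             + rng.normal(0, 2, size=(365, 24)))

    pca = PCA(n_components=3).fit(loads)
    print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
    reconstructed = pca.inverse_transform(pca.transform(loads))   # low-dimensional model
    ```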

  10. Space-time latent component Modeling of Geo-referenced health data

    OpenAIRE

    Lawson, Andrew B.; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-01-01

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find underlying trends in time which are supported by subsets of small areas. Latent structure modeling is one approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made.

  11. Modelling temporal variance of component temperatures and directional anisotropy over vegetated canopy

    Science.gov (United States)

    Bian, Zunjian; du, yongming; li, hua

    2016-04-01

    Land surface temperature (LST), as a key variable, plays an important role in hydrological, meteorological and climatological studies. Thermal infrared directional anisotropy is one of the essential factors in LST retrieval and in its application to longwave radiance estimation. Many approaches have been proposed to estimate directional brightness temperatures (DBT) over natural and urban surfaces. However, fewer efforts focus on 3-D scenes, and the surface component temperatures used in DBT models are quite difficult to acquire. Therefore, a combined 3-D model coupling TRGM (Thermal-region Radiosity-Graphics combined Model) with an energy balance method is proposed in this paper to simulate component temperatures and DBT synchronously in a row-planted canopy. The surface thermodynamic equilibrium is finally determined by iterating between TRGM and the energy balance method. The combined model was validated against top-of-canopy DBTs from airborne observations. The results indicate that the proposed model performs well in simulating directional anisotropy, especially the hotspot effect. Although the model overestimates the DBT with a bias of 1.2 K, it can serve as a data reference for studying the temporal variance of component temperatures and DBTs when field measurements are inaccessible.

  12. Mixture modeling of multi-component data sets with application to ion-probe zircon ages

    Science.gov (United States)

    Sambridge, M. S.; Compston, W.

    1994-12-01

    A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling, in order to estimate the most likely ages, proportions and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each having its own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages. In this case the true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP U-238 - Pb-206 zircon ages from Palaeozoic rocks. A published data set for altered zircons from bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions and standard errors estimated. The older component, at 530 +/- 5 Ma (2 sigma), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic U-238 - Pb-206 age at high precision (2 sigma +/- 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.
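
    As a simplified, assumed stand-in for the paper's procedure (which weights each analysis by its individual error and estimates uncertainties on ages and proportions), the sketch below fits Gaussian mixtures to synthetic ages and picks the number of components by BIC.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)

    # Synthetic "measured ages" (Ma): two mixed populations with analytical scatter
    ages = np.concatenate([rng.normal(530.0, 5.0, 30),
                           rng.normal(560.0, 6.0, 12)]).reshape(-1, 1)

    # Choose the number of components by BIC, then report ages and proportions
    fits = {k: GaussianMixture(n_components=k, random_state=0).fit(ages) for k in (1, 2, 3)}
    best = fits[min(fits, key=lambda k: fits[k].bic(ages))]
    print("components:", best.n_components)
    print("means (Ma):", best.means_.ravel().round(1))
    print("proportions:", best.weights_.round(2))
    ```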

  13. Hierarchical modeling of systems with similar components: A framework for adaptive monitoring and control

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo; Kolter, J. Zico

    2016-01-01

    System management includes the selection of maintenance actions depending on the available observations: when a system is made up of components known to be similar, data collected on one is also relevant for the management of the others. This is typically the case for wind farms, which are made up of similar turbines. Optimal management of wind farms is an important task due to the high cost of turbine operation and maintenance: in this context, we recently proposed a method for planning and learning at system level, called PLUS, built upon the Partially Observable Markov Decision Process (POMDP) framework, which treats transition and emission probabilities as random variables, and is therefore suitable for including model uncertainty. PLUS models the components as independent or identical. In this paper, we extend that formulation, allowing for a weaker similarity among components. The proposed approach, called Multiple Uncertain POMDP (MU-POMDP), models the components as POMDPs, and assumes the corresponding parameters to be dependent random variables. Through this framework, we can calibrate specific degradation and emission models for each component while, at the same time, processing observations at system level. We compare the performance of the proposed MU-POMDP with PLUS, and discuss its potential and computational complexity. - Highlights: • A computational framework is proposed for adaptive monitoring and control. • It adopts a scheme based on Markov Chain Monte Carlo for inference and learning. • Hierarchical Bayesian modeling is used to allow a system-level flow of information. • Results show the potential for significant savings in the management of wind farms.

  14. Low-Altitude Aerial Methane Concentration Mapping

    Directory of Open Access Journals (Sweden)

    Bara J. Emran

    2017-08-01

    Full Text Available Detection of leaks of fugitive greenhouse gases (GHGs) from landfills and natural gas infrastructure is critical not only for their safe operation but also for protecting the environment. Current inspection practices involve moving a methane detector within the target area by a person or vehicle. This procedure is dangerous, time consuming, labor intensive and above all unavailable when access to the desired area is limited. Remote sensing by an unmanned aerial vehicle (UAV) equipped with a methane detector is a cost-effective and fast method for methane detection and monitoring, especially for vast and remote areas. This paper describes the integration of an off-the-shelf laser-based methane detector into a multi-rotor UAV and demonstrates its efficacy in generating an aerial methane concentration map of a landfill. The UAV flies a preset flight path measuring methane concentrations in a vertical air column between the UAV and the ground surface. Measurements were taken at 10 Hz giving a typical distance between measurements of 0.2 m when flying at 2 m/s. The UAV was set to fly at 25 to 30 m above the ground. We conclude that besides its utility in landfill monitoring, the proposed method is ready for other environmental applications as well as the inspection of natural gas infrastructure that can release methane with much higher concentrations.

  15. Computational models for residual creep life prediction of power plant components

    International Nuclear Information System (INIS)

    Grewal, G.S.; Singh, A.K.; Ramamoortry, M.

    2006-01-01

    All high temperature - high pressure power plant components are prone to irreversible visco-plastic deformation by the phenomenon of creep. The steady state creep response as well as the total creep life of a material is related to the operational component temperature through, respectively, the exponential and inverse exponential relationships. Minor increases in the component temperature can thus have serious consequences as far as the creep life and dimensional stability of a plant component are concerned. In high temperature steam tubing in power plants, one mechanism by which a significant temperature rise can occur is by the growth of a thermally insulating oxide film on its steam side surface. In the present paper, an elegantly simple and computationally efficient technique is presented for predicting the residual creep life of steel components subjected to continual steam side oxide film growth. Similarly, fabrication of high temperature power plant components involves extensive use of welding as the fabrication process of choice. Naturally, issues related to the creep life of weldments have to be seriously addressed for safe and continual operation of the welded plant component. Unfortunately, a typical weldment in an engineering structure is a zone of complex microstructural gradation comprising a number of distinct sub-zones with distinct meso-scale and micro-scale morphology of the phases and (even) chemistry, and its creep life prediction presents considerable challenges. The present paper presents a stochastic algorithm which can be used for developing experimental creep-cavitation intensity versus residual life correlations for welded structures. Apart from estimates of the residual life in a mean field sense, the model can be used for predicting the reliability of the plant component in a rigorous probabilistic setting. (author)

  16. The Tripartite Model of Risk Perception (TRIRISK): Distinguishing Deliberative, Affective, and Experiential Components of Perceived Risk.

    Science.gov (United States)

    Ferrer, Rebecca A; Klein, William M P; Persoskie, Alexander; Avishai-Yitshak, Aya; Sheeran, Paschal

    2016-10-01

    Although risk perception is a key predictor in health behavior theories, current conceptions of risk comprise only one (deliberative) or two (deliberative vs. affective/experiential) dimensions. This research tested a tripartite model that distinguishes among deliberative, affective, and experiential components of risk perception. In two studies, and in relation to three common diseases (cancer, heart disease, diabetes), we used confirmatory factor analyses to examine the factor structure of the tripartite risk perception (TRIRISK) model and compared the fit of the TRIRISK model to dual-factor and single-factor models. In a third study, we assessed concurrent validity by examining the impact of cancer diagnosis on (a) levels of deliberative, affective, and experiential risk perception, and (b) the strength of relations among risk components, and tested predictive validity by assessing relations with behavioral intentions to prevent cancer. The tripartite factor structure was supported, producing better model fit across diseases (studies 1 and 2). Inter-correlations among the components were significantly smaller among participants who had been diagnosed with cancer, suggesting that affected populations make finer-grained distinctions among risk perceptions (study 3). Moreover, all three risk perception components predicted unique variance in intentions to engage in preventive behavior (study 3). The TRIRISK model offers both a novel conceptualization of health-related risk perceptions, and new measures that enhance predictive validity beyond that engendered by unidimensional and bidimensional models. The present findings have implications for the ways in which risk perceptions are targeted in health behavior change interventions, health communications, and decision aids.

  17. EM Simulation Accuracy Enhancement for Broadband Modeling of On-Wafer Passive Components

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Jiang, Chenhui; Hadziabdic, Dzenan

    2007-01-01

    This paper describes methods for accuracy enhancement in broadband modeling of on-wafer passive components using electromagnetic (EM) simulation. It is shown that standard excitation schemes for integrated component simulation lead to poor correlation with on-wafer measurements beyond the lower ... GHz frequency range. We show that this is due to parasitic effects and higher-order modes caused by the excitation schemes. We propose a simple equivalent circuit for the parasitic effects in the well-known ground ring excitation scheme. An extended L-2L calibration method is shown to improve ...

  18. Research on The Construction of Flexible Multi-body Dynamics Model based on Virtual Components

    Science.gov (United States)

    Dong, Z. H.; Ye, X.; Yang, F.

    2018-05-01

    Focusing on the harsh operating conditions of space manipulators, which cannot tolerate relatively large collision momenta, this paper proposes a new concept and technology called soft-contact technology. In order to solve the problem of collision dynamics of flexible multi-body systems raised by this technology, the paper also proposes the concepts of virtual components and virtual hinges, constructs a flexible dynamic model based on virtual components, and studies its solution. On this basis, NX is used to carry out modeling and comparative simulation of a space manipulator in three different modes. The results show that using the model of multi-rigid body + flexible body hinge + controllable damping can effectively limit the amplitude of the force and torque caused by collision with the target satellite.

  19. Finsler Geometry Modeling of Phase Separation in Multi-Component Membranes

    Directory of Open Access Journals (Sweden)

    Satoshi Usui

    2016-08-01

    Full Text Available A Finsler geometric surface model is studied as a coarse-grained model for membranes of three components, such as zwitterionic phospholipid (DOPC), lipid (DPPC) and an organic molecule (cholesterol). To understand the phase separation of liquid-ordered (DPPC-rich) Lo and liquid-disordered (DOPC-rich) Ld domains, we introduce a binary variable σ (= ±1) into the triangulated surface model. We numerically find that two types of domains, circular and stripe, appear on the surface. The dependence of the morphological change on the area fraction of Lo is consistent with existing experimental results. This provides us with a clear understanding of the origin of the line tension energy, which has been used to understand these morphological changes in three-component membranes. In addition to the circular and stripe domains, a raft-like domain and a budding domain are also observed, and the several corresponding phase diagrams are obtained.

  20. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  1. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  2. Using Patient Demographics and Statistical Modeling to Predict Knee Tibia Component Sizing in Total Knee Arthroplasty.

    Science.gov (United States)

    Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James

    2018-06-01

    Preoperative planning is important to achieve successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size to a very close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected the data of 1992 primary Persona Knee System TKA procedures. Of them, 199 procedures were randomly selected as testing data and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size within 1 size difference 96% of the time on the validation data, 94% of the time on the testing data, and 92% on a prospective cadaver data set. The study results indicated the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows statistical modeling may be used with radiographs to dramatically enhance the templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable. Copyright © 2018 Elsevier Inc. All rights reserved.
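    A hedged sketch of the kind of proportional-odds (ordinal logistic) model the abstract describes is given below, using the OrderedModel class from statsmodels on synthetic data; the predictor names, size labels and accuracy check are assumptions for illustration, not the study's data or code.

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(0)
        n = 500
        X = pd.DataFrame({
            "female": rng.integers(0, 2, n),
            "age":    rng.uniform(50, 85, n),
            "weight": rng.uniform(55, 120, n),   # kg
            "height": rng.uniform(150, 195, n),  # cm
        })
        # Fabricate an ordinal tibia size (1..8) loosely driven by stature.
        latent = 0.08 * X["height"] + 0.03 * X["weight"] - 1.0 * X["female"]
        size = pd.qcut(latent, 8, labels=range(1, 9))    # ordered categorical

        model = OrderedModel(size, X, distr="logit")     # proportional odds
        res = model.fit(method="bfgs", disp=False)

        probs = np.asarray(res.predict(X))               # n x 8 class probabilities
        predicted = probs.argmax(axis=1) + 1             # back to 1-based size labels
        within_one = np.mean(np.abs(predicted - size.astype(int)) <= 1)
        print(f"predicted within one size: {within_one:.0%}")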

  3. Modeling and numerical simulation of multi-component flow in porous media

    International Nuclear Information System (INIS)

    Saad, B.

    2011-01-01

    This work deals with the modeling and numerical simulation of two-phase, multi-component flow in porous media. The study is divided into two parts. First we study and prove the mathematical existence, in a weak sense, of two degenerate parabolic systems modeling two-phase (liquid and gas), two-component (water and hydrogen) flow in porous media. In the first model, we assume that there is a local thermodynamic equilibrium between both phases of hydrogen by using Henry's law. The second model is a relaxation of the previous one: the kinetics of the mass exchange between dissolved hydrogen and hydrogen in the gas phase is no longer instantaneous. The second part is devoted to the numerical analysis of those models. Firstly, we propose a numerical scheme to compare numerical solutions obtained with the first model and numerical solutions obtained with the second model as the characteristic time to recover thermodynamic equilibrium goes to zero. Secondly, we present a finite volume scheme with a phase-by-phase upstream weighting scheme without simplifying assumptions on the state law of gas densities. We also validate this scheme on 2D test cases. (author)

  4. Steady-State Plant Model to Predict Hydrogen Levels in Power Plant Components

    Energy Technology Data Exchange (ETDEWEB)

    Glatzmaier, Greg C.; Cable, Robert; Newmarker, Marc

    2017-06-27

    The National Renewable Energy Laboratory (NREL) and Acciona Energy North America developed a full-plant steady-state computational model that estimates levels of hydrogen in parabolic trough power plant components. The model estimated dissolved hydrogen concentrations in the circulating heat transfer fluid (HTF), and corresponding partial pressures within each component. Additionally for collector field receivers, the model estimated hydrogen pressure in the receiver annuli. The model was developed to estimate long-term equilibrium hydrogen levels in power plant components, and to predict the benefit of hydrogen mitigation strategies for commercial power plants. Specifically, the model predicted reductions in hydrogen levels within the circulating HTF that result from purging hydrogen from the power plant expansion tanks at a specified target rate. Our model predicted hydrogen partial pressures from 8.3 mbar to 9.6 mbar in the power plant components when no mitigation treatment was employed at the expansion tanks. Hydrogen pressures in the receiver annuli were 8.3 to 8.4 mbar. When hydrogen partial pressure was reduced to 0.001 mbar in the expansion tanks, hydrogen pressures in the receiver annuli fell to a range of 0.001 mbar to 0.02 mbar. When hydrogen partial pressure was reduced to 0.3 mbar in the expansion tanks, hydrogen pressures in the receiver annuli fell to a range of 0.25 mbar to 0.28 mbar. Our results show that controlling hydrogen partial pressure in the expansion tanks allows us to reduce and maintain hydrogen pressures in the receiver annuli to any practical level.

  5. Femoral Component External Rotation Affects Knee Biomechanics: A Computational Model of Posterior-stabilized TKA.

    Science.gov (United States)

    Kia, Mohammad; Wright, Timothy M; Cross, Michael B; Mayman, David J; Pearle, Andrew D; Sculco, Peter K; Westrich, Geoffrey H; Imhauser, Carl W

    2018-01-01

    The correct amount of external rotation of the femoral component during TKA is controversial because the resulting changes in biomechanical knee function associated with varying degrees of femoral component rotation are not well understood. We addressed this question using a computational model, which allowed us to isolate the biomechanical impact of geometric factors including bony shapes, location of ligament insertions, and implant size across three different knees after posterior-stabilized (PS) TKA. Using a computational model of the tibiofemoral joint, we asked: (1) Does external rotation unload the medial collateral ligament (MCL) and what is the effect on lateral collateral ligament tension? (2) How does external rotation alter tibiofemoral contact loads and kinematics? (3) Does 3° external rotation relative to the posterior condylar axis align the component to the surgical transepicondylar axis (sTEA) and what anatomic factors of the femoral condyle explain variations in maximum MCL tension among knees? We incorporated a PS TKA into a previously developed computational knee model applied to three neutrally aligned, nonarthritic, male cadaveric knees. The computational knee model was previously shown to corroborate coupled motions and ligament loading patterns of the native knee through a range of flexion. Implant geometries were virtually installed using hip-to-ankle CT scans through measured resection and anterior referencing surgical techniques. Collateral ligament properties were standardized across each knee model by defining stiffness and slack lengths based on the healthy population. The femoral component was externally rotated from 0° to 9° relative to the posterior condylar axis in 3° increments. At each increment, the knee was flexed under 500 N compression from 0° to 90° simulating an intraoperative examination. The computational model predicted collateral ligament forces, compartmental contact forces, and tibiofemoral internal/external and

  6. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Monte-Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  7. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis, et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Monte-Carlo (MCMC) computation of fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned. Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  8. Invasion percolation of single component, multiphase fluids with lattice Boltzmann models

    International Nuclear Information System (INIS)

    Sukop, M.C.; Or, Dani

    2003-01-01

    Application of the lattice Boltzmann method (LBM) to invasion percolation of single component multiphase fluids in porous media offers an opportunity for more realistic modeling of the configurations and dynamics of liquid/vapor and liquid/solid interfaces. The complex geometry of connected paths in standard invasion percolation models arises solely from the spatial arrangement of simple elements on a lattice. In reality, fluid interfaces and connectivity in porous media are naturally controlled by the details of the pore geometry, its dynamic interaction with the fluid, and the ambient fluid potential. The multiphase LBM approach admits realistic pore geometry derived from imaging techniques and incorporation of realistic hydrodynamics into invasion percolation models

  9. Modelling with Relational Calculus of Object and Component Systems - rCOS

    DEFF Research Database (Denmark)

    Chen, Zhenbang; Hannousse, Abdel Hakim; Hung, Dang Van

    2008-01-01

    This chapter presents a formalization of functional and behavioural requirements, and a refinement of requirements to a design for CoCoME using the Relational Calculus of Object and Component Systems (rCOS). We give a model of requirements based on an abstraction of the use cases described ... in Chapter 3.2. Then the refinement calculus of rCOS is used to derive design models corresponding to the top-level designs of Chapter 3.4. We demonstrate how rCOS supports modelling different views of the system and their relationships, and the separation of concerns in the development ...

  10. Detailed measurements and modelling of thermo active components using a room size test facility

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    measurements in an office sized test facility with thermo active ceiling and floor as well as modelling of similar conditions in a computer program designed for analysis of building integrated heating and cooling systems. A method for characterizing the cooling capacity of thermo active components is described...... typically within 1-2K of the measured results. The simulation model, whose room model splits up the radiative and convective heat transfer between room and surfaces, can also be used to predict the dynamical conditions, where especially the temperature rise during the day is important for designing...

  11. Design of a logistics performance measurement model for the automotive component industry to strengthen competitiveness in dealing with AEC 2015

    Science.gov (United States)

    Amran, T. G.; Janitra Yose, Mindy

    2018-03-01

    As the ASEAN Economic Community (AEC) free trade area brings tougher competition, it is important that Indonesia's automotive industry be highly competitive as well. A logistics performance measurement model was designed as an evaluation tool for automotive component companies to improve their logistics performance in order to compete in the AEC. The design of the model was based on the Logistics Scorecard perspectives and divided into two stages: identifying the logistics business strategy to obtain the KPIs, and arranging the model. 23 KPIs were obtained. The measurement results can be taken into consideration when determining policies to improve logistics performance and competitiveness.

  12. Equivalent water height extracted from GRACE gravity field model with robust independent component analysis

    Science.gov (United States)

    Guo, Jinyun; Mu, Dapeng; Liu, Xin; Yan, Haoming; Dai, Honglei

    2014-08-01

    The Level-2 monthly GRACE gravity field models issued by the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) are treated as observations used to extract the equivalent water height (EWH) with robust independent component analysis (RICA). Smoothing radii of 300, 400, and 500 km are tested in the Gaussian smoothing kernel function to suppress observation noise. Three independent components are obtained by RICA in the spatial domain; the first component matches the geophysical signal, and the other two match the north-south stripes and other noise. The first mode is used to estimate the EWHs of CSR, JPL, and GFZ, and is compared with the classical empirical decorrelation method (EDM). The EWH standard deviations for the 12 months of 2010 extracted by RICA and EDM show obvious fluctuations. The results indicate that sharp EWH changes in some areas have an important global effect, as in the Amazon, Mekong, and Zambezi basins.
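    A hedged sketch of the analysis pattern (decomposing a gridded EWH time series into spatial components and their temporal amplitudes) is given below, using scikit-learn's FastICA as a generic stand-in for RICA; the data, grid and component count are made up for illustration.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        n_months, nlat, nlon = 24, 90, 180
        t = np.arange(n_months)

        # Fake data: an annual hydrological signal plus north-south "stripe"
        # noise, each mapped onto a simple spatial pattern.
        lat = np.linspace(-90, 90, nlat)[:, None]
        lon = np.linspace(-180, 180, nlon)[None, :]
        signal_map = np.exp(-((lat + 5) ** 2 + (lon + 60) ** 2) / 800.0)   # a "basin"
        stripe_map = np.sin(np.deg2rad(lon) * 18) * np.ones_like(lat)      # stripes
        ewh = (np.sin(2 * np.pi * t / 12.0)[:, None, None] * signal_map
               + 0.3 * rng.standard_normal(n_months)[:, None, None] * stripe_map)

        X = ewh.reshape(n_months, -1)            # months x grid cells
        ica = FastICA(n_components=3, random_state=0, max_iter=1000)
        sources = ica.fit_transform(X.T)         # spatial components (cells x comps)
        mixing = ica.mixing_                     # temporal amplitudes (months x comps)
        print(sources.shape, mixing.shape)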

  13. The ORC method. Effective modelling of thermal performance of multilayer building components

    Energy Technology Data Exchange (ETDEWEB)

    Akander, Jan

    2000-02-01

    The ORC Method (Optimised RC-networks) provides a means of modelling one- or multidimensional heat transfer in building components, in this context within building simulation environments. The methodology is shown, primarily applied to heat transfer in multilayer building components. For multilayer building components, the analytical thermal performance is known, given layer thickness and material properties. The aim of the ORC Method is to optimise the values of the thermal resistances and heat capacities of an RC-model so as to give model performance a good agreement with the analytical performance over a wide range of frequencies. The optimisation procedure is made in the frequency domain, where the overall deviation between model and analytical frequency response, in terms of admittance and dynamic transmittance, is minimised. It is shown that ORCs are effective in terms of accuracy and computational time in comparison to finite difference models when used in building simulations, in this case with IDA/ICE. An ORC configuration of five mass nodes has been found to model building components in Nordic countries well, within the application of thermal comfort and energy requirement simulations. Simple RC-networks, such as the surface heat capacity and the simple R-C configuration, are not appropriate for detailed building simulation. However, these can be used as a basis for defining the effective heat capacity of a building component. An approximate method is suggested for determining the effective heat capacity without the use of complex numbers. This entity can be calculated on the basis of layer thickness and material properties with the help of two time constants. The approximate method can give inaccuracies corresponding to 20%. In-situ measurements have been carried out in an experimental building with the purpose of establishing the effective heat capacity of external building components that are subjected to normal thermal conditions. The auxiliary
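    The core idea (tuning RC values so the network's frequency response matches the analytical response of the wall) can be sketched as below; the slab properties, network topology and frequency range are assumptions chosen for illustration, not the ORC configurations used in the thesis.

        import numpy as np
        from scipy.optimize import least_squares

        k, rho, c, L = 1.7, 2300.0, 880.0, 0.15     # W/mK, kg/m3, J/kgK, m (concrete-like)

        def slab_admittance(omega):
            # Exact surface admittance of a homogeneous slab, far side isothermal.
            gamma = np.sqrt(1j * omega * rho * c / k)
            return k * gamma / np.tanh(gamma * L)

        def rc_admittance(params, omega):
            # T-network: R*a in series with (capacitance C_th) parallel to
            # R*(1-a) toward the isothermal side; params = (a, C_th).
            a, C_th = params
            R = L / k
            return 1.0 / (a * R + 1.0 / (1j * omega * C_th + 1.0 / ((1.0 - a) * R)))

        omega = 2 * np.pi / np.logspace(3, 6, 40)   # periods ~17 min to ~12 days

        def residual(params):
            diff = rc_admittance(params, omega) - slab_admittance(omega)
            return np.concatenate([diff.real, diff.imag])

        fit = least_squares(residual, x0=[0.5, 0.5 * rho * c * L],
                            bounds=([0.01, 1.0], [0.99, rho * c * L]))
        print("optimised resistance split and heat capacity:", fit.x)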

  14. Antidepressant Potentials of Components from Trichilia monadelpha (Thonn.) J.J. de Wilde in Murine Models

    Directory of Open Access Journals (Sweden)

    Kennedy Kwami Edem Kukuia

    2018-01-01

    Full Text Available Trichilia monadelpha is a common medicinal plant used traditionally in treating central nervous system conditions such as epilepsy, depression, pain, and psychosis. In this study, the antidepressant-like effect of crude extracts of the stem bark of T. monadelpha was investigated using two classical murine models, the forced swimming test (FST) and tail suspension test (TST). The extracts (petroleum ether, ethyl acetate, and hydroethanolic; 30-300 mg/kg, p.o.), the standard drugs (imipramine and fluoxetine, 3-30 mg/kg, p.o.), and saline (vehicle) were given to mice one hour prior to the acute study. In a separate experiment the components (flavonoids, saponins, alkaloids, tannins, and terpenoids; 30-300 mg/kg, p.o.) from the most efficacious extract fraction were screened to ascertain which components possessed the antidepressant effect. All the extracts and components significantly induced a decline in immobility in the FST and TST, indicative of an antidepressant-like activity. The extracts and some components showed increases in swimming and climbing in the FST as well as a significant enhancement in swinging and/or curling scores in the TST, suggesting a possible involvement of monoaminergic and/or opioidergic activity. This study reveals the antidepressant-like potential of the stem bark extracts and components of T. monadelpha.

  15. Superfluid drag in the two-component Bose-Hubbard model

    Science.gov (United States)

    Sellin, Karl; Babaev, Egor

    2018-03-01

    In multicomponent superfluids and superconductors, co- and counterflows of components have, in general, different properties. A. F. Andreev and E. P. Bashkin [Sov. Phys. JETP 42, 164 (1975)] discussed, in the context of He3/He4 superfluid mixtures, that interparticle interactions produce a dissipationless drag. The drag can be understood as a superflow of one component induced by phase gradients of the other component. Importantly, the drag can be both positive (entrainment) and negative (counterflow). The effect is known to have crucial importance for many properties of diverse physical systems ranging from the dynamics of neutron stars and rotational responses of Bose mixtures of ultracold atoms to magnetic responses of multicomponent superconductors. Although substantial literature exists that includes the drag interaction phenomenologically, only a few regimes are covered by quantitative studies of the microscopic origin of the drag and its dependence on microscopic parameters. Here we study the microscopic origin and strength of the drag interaction in a quantum system of two-component bosons on a lattice with short-range interaction. By performing quantum Monte Carlo simulations of a two-component Bose-Hubbard model we obtain dependencies of the drag strength on the boson-boson interactions and properties of the optical lattice. Of particular interest are the strongly correlated regimes where the ratio of coflow and counterflow superfluid stiffnesses can diverge, corresponding to the case of saturated drag.

  16. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes of unavailability according to variation of diverse factors. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) has multiple sophisticated fault-tolerant techniques (FTTs), in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT in reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in the reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability
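    As a rough, hedged illustration of how a detection-coverage figure feeds an unavailability estimate, the snippet below uses a textbook-style approximation (detected faults repaired after a mean repair time, undetected faults latent until a periodic test); the rates and intervals are invented, and this is not the integrated-coverage model developed in the paper.

        # Textbook-style unavailability estimate for a repairable digital component.
        lam      = 1.0e-5   # failure rate per hour (assumed)
        coverage = 0.95     # fraction of faults detected by the fault-tolerant techniques
        mttr     = 8.0      # mean repair time for detected faults, hours
        t_test   = 730.0    # surveillance test interval for latent faults, hours

        q_detected   = coverage * lam * mttr              # revealed, repaired quickly
        q_undetected = (1 - coverage) * lam * t_test / 2  # latent until the next test
        print(f"approximate unavailability: {q_detected + q_undetected:.2e}")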

  17. Simplifying and upscaling water resources systems models that combine natural and engineered components

    Science.gov (United States)

    McIntyre, N.; Keir, G.

    2014-12-01

    Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.

  18. Blind Separation of Acoustic Signals Combining SIMO-Model-Based Independent Component Analysis and Binary Masking

    Directory of Open Access Journals (Sweden)

    Hiekata Takashi

    2006-01-01

    Full Text Available A new two-stage blind source separation (BSS) method for convolutive mixtures of speech is proposed, in which a single-input multiple-output (SIMO)-model-based independent component analysis (ICA) and a new SIMO-model-based binary masking are combined. SIMO-model-based ICA enables us to separate the mixed signals, not into monaural source signals, but into SIMO-model-based signals from independent sources in their original form at the microphones. Thus, the separated signals of SIMO-model-based ICA can maintain the spatial qualities of each sound source. Owing to this attractive property, our novel SIMO-model-based binary masking can be applied to efficiently remove the residual interference components after SIMO-model-based ICA. The experimental results reveal that the separation performance can be considerably improved by the proposed method compared with that achieved by conventional BSS methods. In addition, the real-time implementation of the proposed BSS is illustrated.

  19. User's guide to the weather model: a component of the western spruce budworm modeling system.

    Science.gov (United States)

    W. P. Kemp; N. L. Crookston; P. W. Thomas

    1989-01-01

    A stochastic model useful in simulating daily maximum and minimum temperature and precipitation developed by Bruhn and others has been adapted for use in the western spruce budworm modeling system. This document describes how to use the weather model and illustrates some aspects of its behavior.

  20. Resolution and Probabilistic Models of Components in CryoEM Maps of Mature P22 Bacteriophage

    Science.gov (United States)

    Pintilie, Grigore; Chen, Dong-Hua; Haase-Pettingell, Cameron A.; King, Jonathan A.; Chiu, Wah

    2016-01-01

    CryoEM continues to produce density maps of larger and more complex assemblies with multiple protein components of mixed symmetries. Resolution is not always uniform throughout a cryoEM map, and it can be useful to estimate the resolution in specific molecular components of a large assembly. In this study, we present procedures to 1) estimate the resolution in subcomponents by gold-standard Fourier shell correlation (FSC); 2) validate modeling procedures, particularly at medium resolutions, which can include loop modeling and flexible fitting; and 3) build probabilistic models that combine high-accuracy priors (such as crystallographic structures) with medium-resolution cryoEM densities. As an example, we apply these methods to new cryoEM maps of the mature bacteriophage P22, reconstructed without imposing icosahedral symmetry. Resolution estimates based on gold-standard FSC show the highest resolution in the coat region (7.6 Å), whereas other components are at slightly lower resolutions: portal (9.2 Å), hub (8.5 Å), tailspike (10.9 Å), and needle (10.5 Å). These differences are indicative of inherent structural heterogeneity and/or reconstruction accuracy in different subcomponents of the map. Probabilistic models for these subcomponents provide new insights, to our knowledge, and structural information when taking into account uncertainty given the limitations of the observed density. PMID:26743049
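    A hedged sketch of a gold-standard FSC computation between two independent half-maps (here just random arrays standing in for masked subcomponent maps) is shown below; the shell count, map size and the 0.143 criterion follow common convention and are not values taken from the paper.

        import numpy as np

        def fsc(map1, map2, n_shells=30):
            # Fourier shell correlation between two equally sized 3-D maps.
            f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
            nz, ny, nx = map1.shape
            kz, ky, kx = np.meshgrid(np.fft.fftfreq(nz), np.fft.fftfreq(ny),
                                     np.fft.fftfreq(nx), indexing="ij")
            radius = np.sqrt(kx**2 + ky**2 + kz**2)
            edges = np.linspace(0.0, 0.5, n_shells + 1)
            curve = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                shell = (radius >= lo) & (radius < hi)
                num = np.sum(f1[shell] * np.conj(f2[shell]))
                den = np.sqrt(np.sum(np.abs(f1[shell])**2) *
                              np.sum(np.abs(f2[shell])**2))
                curve.append(np.real(num) / den if den > 0 else 0.0)
            return edges[1:], np.array(curve)

        half1 = np.random.rand(64, 64, 64)            # stand-ins for masked half-maps
        half2 = half1 + 0.5 * np.random.rand(64, 64, 64)
        freq, curve = fsc(half1, half2)
        below = freq[curve < 0.143]
        print("FSC, first five shells:", np.round(curve[:5], 3))
        print("first shell below FSC = 0.143:", below[0] if below.size else "none")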

  1. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
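    The workflow (feed raw spin configurations to PCA and inspect the leading components) can be sketched as follows; the configurations here are random angles rather than genuine Monte Carlo output, so the numbers are purely illustrative.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        n_samples, n_sites = 2000, 24 * 24
        theta = rng.uniform(0, 2 * np.pi, size=(n_samples, n_sites))  # XY spin angles

        # Represent each XY spin by its (cos, sin) pair so PCA sees linear features.
        X = np.concatenate([np.cos(theta), np.sin(theta)], axis=1)

        pca = PCA(n_components=4)
        proj = pca.fit_transform(X)
        print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 4))
        # In the actual workflow, `proj` would be plotted against temperature to
        # locate phase transitions from jumps in the leading components.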

  2. The multi-component model of working memory: explorations in experimental cognitive psychology.

    Science.gov (United States)

    Repovs, G; Baddeley, A

    2006-04-28

    There are a number of ways one can hope to describe and explain cognitive abilities, each of them contributing a unique and valuable perspective. Cognitive psychology tries to develop and test functional accounts of cognitive systems that explain the capacities and properties of cognitive abilities as revealed by empirical data gathered by a range of behavioral experimental paradigms. Much of the research in the cognitive psychology of working memory has been strongly influenced by the multi-component model of working memory [Baddeley AD, Hitch GJ (1974) Working memory. In: Recent advances in learning and motivation, Vol. 8 (Bower GA, ed), pp 47-90. New York: Academic Press; Baddeley AD (1986) Working memory. Oxford, UK: Clarendon Press; Baddeley A. Working memory: Thought and action. Oxford: Oxford University Press, in press]. By expanding the notion of a passive short-term memory to an active system that provides the basis for complex cognitive abilities, the model has opened up numerous questions and new lines of research. In this paper we present the current revision of the multi-component model that encompasses a central executive, two unimodal storage systems: a phonological loop and a visuospatial sketchpad, and a further component, a multimodal store capable of integrating information into unitary episodic representations, termed episodic buffer. We review recent empirical data within experimental cognitive psychology that has shaped the development of the multicomponent model and the understanding of the capacities and properties of working memory. Research based largely on dual-task experimental designs and on neuropsychological evidence has yielded valuable information about the fractionation of working memory into independent stores and processes, the nature of representations in individual stores, the mechanisms of their maintenance and manipulation, the way the components of working memory relate to each other, and the role they play in other

  3. A Modeling Framework to Investigate the Radial Component of the Pushrim Force in Manual Wheelchair Propulsion

    Directory of Open Access Journals (Sweden)

    Ackermann Marko

    2015-01-01

    Full Text Available The ratio of tangential to total pushrim force, the so-called Fraction Effective Force (FEF), has been used to evaluate wheelchair propulsion efficiency based on the fact that only the tangential component of the force on the pushrim contributes to actual wheelchair propulsion. Experimental studies, however, consistently show low FEF values, and recent experimental as well as modelling investigations have conclusively shown that a more tangential pushrim force direction can lead to a decrease, not an increase, in propulsion efficiency. This study aims at quantifying the contributions of active, inertial and gravitational forces to the normal pushrim component. In order to achieve this goal, an inverse dynamics-based framework is proposed to estimate individual contributions to the pushrim forces using a model of the wheelchair-user system. The results show that the radial pushrim force component arises to a great extent from purely mechanical effects, including inertial and gravitational forces. These results corroborate previous findings according to which radial pushrim force components are not necessarily a result of inefficient propulsion strategies or hand-rim friction requirements. This study proposes a novel framework to quantify the individual contributions of active, inertial and gravitational forces to pushrim forces during wheelchair propulsion.
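    A hedged numerical sketch of the FEF bookkeeping (projecting a measured pushrim force onto radial and tangential directions at the hand position) is given below; the force samples, hand angles and sign conventions are invented for illustration.

        import numpy as np

        # Hand positions on the rim, measured from top dead centre toward the front.
        phi = np.deg2rad(np.array([-30., -20., -10., 0., 10., 20., 30.]))
        Fx = np.array([ 20.,  35.,  50.,  60.,  55.,  40.,  25.])   # N, forward
        Fy = np.array([-60., -70., -80., -85., -75., -60., -45.])   # N, upward axis

        radial     = np.stack([np.sin(phi),  np.cos(phi)], axis=1)  # outward unit vector
        tangential = np.stack([np.cos(phi), -np.sin(phi)], axis=1)  # propulsive direction

        F = np.stack([Fx, Fy], axis=1)
        F_tan = np.sum(F * tangential, axis=1)
        F_rad = np.sum(F * radial, axis=1)
        fef = 100.0 * np.abs(F_tan) / np.linalg.norm(F, axis=1)
        print("tangential (N):", np.round(F_tan, 1))
        print("radial (N):    ", np.round(F_rad, 1))
        print("FEF (%):       ", np.round(fef, 1))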

  4. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their reliability might be subjective according to a particular set of circumstances, and therefore it is not easy to quantify their reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted most easily, built around the statistical analysis method. The failure rate models that were applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with the general purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components.
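    A hedged sketch of a parts-count style calculation in the spirit of MIL-HDBK-217-type models is shown below; the base failure rates and pi-factors are placeholders rather than handbook values, and the real models apply many more stress factors.

        # Parts-count reliability sketch: system failure rate is the sum of part
        # failure rates, each scaled by quality and environment factors.
        parts = [
            # (name, base failure rate per 1e6 h, pi_quality, pi_environment, count)
            ("ceramic capacitor", 0.0023, 1.0, 2.0, 40),
            ("film resistor",     0.0011, 1.0, 2.0, 85),
            ("opamp IC",          0.0450, 2.0, 4.0,  6),
            ("connector",         0.0100, 1.0, 3.0,  4),
        ]

        lambda_system = sum(base * q * env * n for _, base, q, env, n in parts)
        mtbf_hours = 1e6 / lambda_system
        print(f"system failure rate: {lambda_system:.3f} per 1e6 h, "
              f"MTBF ~ {mtbf_hours:,.0f} h")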

  5. Thermodynamically consistent modeling and simulation of multi-component two-phase flow model with partial miscibility

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu

    2016-01-01

    A general diffuse interface model with a realistic equation of state (e.g. Peng-Robinson equation of state) is proposed to describe the multi-component two-phase fluid flow based on the principles of the NVT-based framework which is a latest

  6. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    Science.gov (United States)

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that the NCWin can easily extend the functions of some current GIS software and the Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
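    NCWin itself is a Windows COM component, but the same read/write/inspect workflow can be sketched with the standard Python netCDF4 bindings; the file name, dimensions and variable below are made up for illustration.

        import numpy as np
        from netCDF4 import Dataset

        # Write a small gridded time series.
        with Dataset("demo.nc", "w") as nc:
            nc.createDimension("time", None)          # unlimited
            nc.createDimension("lat", 10)
            nc.createDimension("lon", 20)
            var = nc.createVariable("lai", "f4", ("time", "lat", "lon"))
            var.units = "m2 m-2"
            var[0:3] = np.random.rand(3, 10, 20).astype("f4")

        # Read it back and report basic metadata, as a viewer component would.
        with Dataset("demo.nc") as nc:
            lai = nc.variables["lai"]
            print(lai.dimensions, lai.shape, lai.units)
            print("first-pixel trend:", lai[:, 0, 0])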

  7. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-06-01

    The demand for automobiles increased drastically in the last two and a half decades in India. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO) has taken a proactive approach, in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, to assist Indian SMEs in various clusters since 1999 to make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the fields of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework provides greater insight into the implementation process at a more micro level as compared to the UNIDO-ACMA Model.

  8. A Model of Yeast Cell-Cycle Regulation Based on a Standard Component Modeling Strategy for Protein Regulatory Networks.

    Directory of Open Access Journals (Sweden)

    Teeraphan Laomettachit

    Full Text Available To understand the molecular mechanisms that regulate cell cycle progression in eukaryotes, a variety of mathematical modeling approaches have been employed, ranging from Boolean networks and differential equations to stochastic simulations. Each approach has its own characteristic strengths and weaknesses. In this paper, we propose a "standard component" modeling strategy that combines advantageous features of Boolean networks, differential equations and stochastic simulations in a framework that acknowledges the typical sorts of reactions found in protein regulatory networks. Applying this strategy to a comprehensive mechanism of the budding yeast cell cycle, we illustrate the potential value of standard component modeling. The deterministic version of our model reproduces the phenotypic properties of wild-type cells and of 125 mutant strains. The stochastic version of our model reproduces the cell-to-cell variability of wild-type cells and the partial viability of the CLB2-dbΔ clb5Δ mutant strain. Our simulations show that mathematical modeling with "standard components" can capture in quantitative detail many essential properties of cell cycle control in budding yeast.

  9. General model for Pc-based simulation of PWR and BWR plant components

    Energy Technology Data Exchange (ETDEWEB)

    Ratemi, W M; Abomustafa, A M [Faculty of Engineering, Alfateh University, Tripoli (Libyan Arab Jamahiriya)

    1995-10-01

    In this paper, we present a basic mathematical model derived from physical principles to suit the simulation of PWR components such as the pressurizer, intact steam generator, ruptured steam generator, and the reactor component of a BWR plant. In our development, we produced an NMMS package for nuclear modular modelling simulation. Such a package is installed on a personal computer and is designed to be user friendly through a color graphics windows interface. The package works under three environments, namely pre-processor, simulation, and post-processor. Our analysis of results, using a cross-graphing technique for the steam generator tube rupture (SGTR) accident, yielded a new proposal for on-line monitoring of the control strategy for an SGTR accident in a nuclear or conventional power plant. 4 figs.

  10. The Internet addiction components model and personality: establishing construct validity via a nomological network

    OpenAIRE

    Kuss, DJ; Shorter, GW; Van Rooij, AJ; Van de Mheen, D; Griffiths, MD

    2014-01-01

    There is growing concern over excessive and sometimes problematic Internet use. Drawing upon the framework of the components model of addiction (Griffiths, 2005), Internet addiction appears as a behavioural addiction characterised by the following symptoms: salience, withdrawal, tolerance, mood modification, relapse and conflict. A number of factors have been associated with an increased risk for Internet addiction, including personality traits. The overall aim of this study was to establish th...

  11. Evaluation of low dose ionizing radiation effect on some blood components in animal model

    OpenAIRE

    El-Shanshoury, H.; El-Shanshoury, G.; Abaza, A.

    2016-01-01

    Exposure to ionizing radiation is known to have lethal effects on blood cells. It is predicted that an individual may spend days, weeks or even months in a radiation field without becoming alarmed. The study aimed to evaluate the effect of low-dose ionizing radiation (IR) on some blood components in an animal model. Hematological parameters were determined for 110 rats (divided into 8 groups) pre- and post-irradiation. An attempt to explain the blood changes resulting from both ...

  12. Some results of model calculations of the solar S-component radio emission

    International Nuclear Information System (INIS)

    Krueger, A.; Hildebrandt, J.

    1985-01-01

    Numerical calculations of special characteristics of the solar S-component microwave radiation are presented on the basis of recent sunspot and plage models. Quantitative results are discussed and can be used for the plasma diagnostics of solar active regions by comparisons with observations with high spatial and spectral resolution. The possibility of generalized applications to magnetic stars and stellar activity is briefly noted. (author)

  13. The Effect of Multidimensional Motivation Interventions on Cognitive and Behavioral Components of Motivation: Testing Martin's Model

    OpenAIRE

    Fatemeh PooraghaRoodbarde; Siavash Talepasand; Issac Rahimian Boogar

    2017-01-01

    Objective: The present study aimed at examining the effect of multidimensional motivation interventions based on Martin's model on cognitive and behavioral components of motivation. Methods: The research design was prospective with pretest, posttest, and follow-up, and 2 experimental groups. In this study, 90 students (45 participants in the experimental group and 45 in the control group) constituted the sample of the study, and they were selected by available sampling method. Motivation inter...

  14. High Cost/High Risk Components to Chalcogenide Molded Lens Model: Molding Preforms and Mold Technology

    Energy Technology Data Exchange (ETDEWEB)

    Bernacki, Bruce E.

    2012-10-05

    This brief report contains a critique of two key components of FiveFocal's cost model for glass compression molding of chalcogenide lenses for infrared applications. Molding preforms and mold technology have the greatest influence on the ultimate cost of the product and help determine the volumes needed to select glass molding over conventional single-point diamond turning or grinding and polishing. This brief report highlights key areas of both technologies with recommendations for further study.

  15. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a principal component analysis and a vector error correction model.
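    A hedged sketch of the two-step workflow (compress bank balance-sheet indicators with PCA, then relate the leading component to a macro series in a vector error correction model) is given below with simulated series; the variable choices, lag order and cointegration rank are illustrative assumptions, not the paper's specification.

        import numpy as np
        import pandas as pd
        from sklearn.decomposition import PCA
        from statsmodels.tsa.vector_ar.vecm import VECM

        rng = np.random.default_rng(3)
        T = 160                                             # quarterly observations
        bank_ratios = pd.DataFrame(rng.standard_normal((T, 12)).cumsum(axis=0))

        capital_factor = PCA(n_components=1).fit_transform(bank_ratios)[:, 0]
        gdp = 0.4 * capital_factor + rng.standard_normal(T).cumsum()

        data = pd.DataFrame({"capital_factor": capital_factor, "gdp": gdp})
        res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
        print("adjustment coefficients (alpha):", res.alpha.ravel())
        print("cointegrating vector (beta):    ", res.beta.ravel())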

  16. Activity Recognition Using A Combination of Category Components And Local Models for Video Surveillance

    OpenAIRE

    Lin, Weiyao; Sun, Ming-Ting; Poovendran, Radha; Zhang, Zhengyou

    2015-01-01

    This paper presents a novel approach for automatic recognition of human activities for video surveillance applications. We propose to represent an activity by a combination of category components, and demonstrate that this approach offers flexibility to add new activities to the system and an ability to deal with the problem of building models for activities lacking training data. For improving the recognition accuracy, a Confident-Frame-based Recognition algorithm is also proposed, where th...

  17. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, Stephen [Cleveland State Univ., Cleveland, OH (United States)

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  18. A Computational Model of Torque Generation: Neural, Contractile, Metabolic and Musculoskeletal Components

    Science.gov (United States)

    Callahan, Damien M.; Umberger, Brian R.; Kent-Braun, Jane A.

    2013-01-01

    The pathway of voluntary joint torque production includes motor neuron recruitment and rate-coding, sarcolemmal depolarization and calcium release by the sarcoplasmic reticulum, force generation by motor proteins within skeletal muscle, and force transmission by tendon across the joint. The direct source of energetic support for this process is ATP hydrolysis. It is possible to examine portions of this physiologic pathway using various in vivo and in vitro techniques, but an integrated view of the multiple processes that ultimately impact joint torque remains elusive. To address this gap, we present a comprehensive computational model of the combined neuromuscular and musculoskeletal systems that includes novel components related to intracellular bioenergetics function. Components representing excitatory drive, muscle activation, force generation, metabolic perturbations, and torque production during voluntary human ankle dorsiflexion were constructed, using a combination of experimentally-derived data and literature values. Simulation results were validated by comparison with torque and metabolic data obtained in vivo. The model successfully predicted peak and submaximal voluntary and electrically-elicited torque output, and accurately simulated the metabolic perturbations associated with voluntary contractions. This novel, comprehensive model could be used to better understand impact of global effectors such as age and disease on various components of the neuromuscular system, and ultimately, voluntary torque output. PMID:23405245

  19. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    International Nuclear Information System (INIS)

    Duffy, Stephen

    2013-01-01

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  20. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain normalization coefficients with high statistical accuracy from a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are directly applied to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct the accuracy of the system model. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth that required in the direct method. (author)
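
    As background for the component idea, classical component-based normalization factorizes the efficiency of each line of response into per-crystal efficiencies (and geometric factors, omitted here). A minimal sketch of a fan-sum-style iteration for the crystal-efficiency component is shown below; it is a generic illustration, not the two-component system-matrix correction proposed in the paper.

```python
import numpy as np

def fansum_crystal_efficiencies(blank_counts, n_iter=20):
    """Estimate per-crystal efficiencies eps[i] from blank-scan coincidences,
    assuming expected counts on a line of response factorize as eps[i] * eps[j].

    blank_counts : (n, n) symmetric array of coincidence counts between crystals i and j
    """
    n = blank_counts.shape[0]
    eps = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            fan_sum = blank_counts[i, :].sum() - blank_counts[i, i]
            denom = eps.sum() - eps[i]
            if denom > 0:
                eps[i] = fan_sum / denom
        eps *= n / eps.sum()  # remove the arbitrary overall scale
    return eps
```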

  1. Measurement and modeling of shortwave irradiance components in cloud-free atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Halthore, R.N.

    1999-08-04

    The atmosphere scatters and absorbs incident solar radiation, modifying its spectral content and decreasing its intensity at the surface. It is useful to classify the earth-atmosphere solar radiation into several components: direct solar surface irradiance (E_direct), diffuse-sky downward surface irradiance (E_diffuse), total surface irradiance, and upwelling flux at the surface and at the top of the atmosphere. E_direct depends only on the extinction properties of the atmosphere without regard to the details of extinction, namely scattering or absorption; furthermore, it can be measured to high accuracy (0.3%) with the aid of an active cavity radiometer (ACR). E_diffuse has relatively larger uncertainties both in its measurement using shaded pyranometers and in model estimates, owing to the difficulty in accurately characterizing pyranometers and in measuring model inputs such as surface reflectance, aerosol single scattering albedo, and phase function. Radiative transfer model simulations of the above surface radiation components in cloud-free skies using measured atmospheric properties show that while E_direct estimates are closer to measurements, E_diffuse is overestimated by an amount larger than the combined uncertainties in model inputs and measurements, illustrating a fundamental gap in the understanding of the magnitude of atmospheric absorption in cloud-free skies. The excess continuum-type absorption required to reduce the E_diffuse model overestimate (approximately 3--8% absorptance) would significantly impact climate prediction and remote sensing. It is not clear at present what the source of this continuum absorption is. Here, issues related to measurements and modeling of the surface irradiance components are discussed.
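
    The components named above are related by a simple closure that is often used to cross-check the measurements: the total downward surface irradiance is the direct beam projected onto the horizontal plus the diffuse-sky term. A minimal sketch (with hypothetical numbers, not values from the report):

```python
import math

def total_surface_irradiance(e_direct_normal, e_diffuse, solar_zenith_deg):
    """Total downward shortwave irradiance at the surface (W m^-2).

    e_direct_normal  : direct-beam irradiance on a plane normal to the sun,
                       e.g. from an active cavity radiometer (ACR)
    e_diffuse        : diffuse-sky downward irradiance, e.g. from a shaded pyranometer
    solar_zenith_deg : solar zenith angle in degrees
    """
    mu0 = max(math.cos(math.radians(solar_zenith_deg)), 0.0)
    return e_direct_normal * mu0 + e_diffuse

# Hypothetical cloud-free example: 850 W/m^2 direct-normal, 90 W/m^2 diffuse, sun 30 deg from zenith
print(round(total_surface_irradiance(850.0, 90.0, 30.0), 1))
```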

  2. Three-component model of solar wind--interstellar medium interaction: some numerical results

    International Nuclear Information System (INIS)

    Baranov, V.; Ermakov, M.; Lebedev, M.

    1981-01-01

    A three-component (electrons, protons, H atoms) model for the interaction between the local interstellar medium and the solar wind is considered. A numerical analysis has been performed to determine how resonance charge exchange in interstellar H atoms that have penetrated the solar wind would affect the two-shock model developed previously by Baranov et al. In particular, if n_H∞/n_e∞ > 10 (where n_H∞ and n_e∞ denote the number densities of H atoms and electrons in the local ISM), the inner shock may approach the sun as closely as the outer planetary orbits

  3. Space-time latent component modeling of geo-referenced health data.

    Science.gov (United States)

    Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-08-30

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.

  4. Transformation of renormalization groups in 2N-component fermion hierarchical model

    International Nuclear Information System (INIS)

    Stepanov, R.G.

    2006-01-01

    The 2N-component fermion model on the hierarchical lattice is studied. Explicit formulae are presented for the renormalization group transformation in the space of coefficients defining the Grassmann-valued density of the free measure. The inverse transformation of the renormalization group is calculated. Finding the fixed points of the renormalization group is reduced to solving a set of algebraic equations. An interesting connection between renormalization group transformations in boson and fermion hierarchical models is found. It is shown that one transformation is obtained from the other by the substitution of N by -N

  5. Mixture estimation with state-space components and Markov model of switching

    Czech Academy of Sciences Publication Activity Database

    Nagy, Ivan; Suzdaleva, Evgenia

    2013-01-01

    Roč. 37, č. 24 (2013), s. 9970-9984 ISSN 0307-904X R&D Projects: GA TA ČR TA01030123 Institutional support: RVO:67985556 Keywords: probabilistic dynamic mixtures * probability density function * state-space models * recursive mixture estimation * Bayesian dynamic decision making under uncertainty * Kerridge inaccuracy Subject RIV: BC - Control Systems Theory Impact factor: 2.158, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/nagy-mixture estimation with state-space components and markov model of switching.pdf

  6. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    International Nuclear Information System (INIS)

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study

  7. Integrated model-experimental framework to assess carbon cycle components in disturbed mountainous terrain

    Science.gov (United States)

    Stenzel, J.; Hudiburg, T. W.; Berardi, D.; McNellis, B.; Walsh, E.

    2017-12-01

    In forests vulnerable to drought and fire, there is critical need for in situ carbon and water balance measurements that can be integrated with earth system modeling to predict climate feedbacks. Model development can be improved by measurements that inform a mechanistic understanding of the component fluxes of net carbon uptake (i.e., NPP, autotrophic and heterotrophic respiration) and water use, with specific focus on responses to climate and disturbance. By integrating novel field-based instrumental technology, existing datasets, and state-of-the-art earth system modeling, we are attempting to 1) quantify the spatial and temporal impacts of forest thinning on regional biogeochemical cycling and climate 2) evaluate the impact of forest thinning on forest resilience to drought and disturbance in the Northern Rockies ecoregion. The combined model-experimental framework enables hypothesis testing that would otherwise be impossible because the use of new in situ high temporal resolution field technology allows for research in remote and mountainous terrains that have been excluded from eddy-covariance techniques. Our preliminary work has revealed some underlying difficulties with the new instrumentation that has led to new ideas and modified methods to correctly measure the component fluxes. Our observations of C balance following the thinning operations indicate that the recovery period (source to sink) is longer than hypothesized. Finally, we have incorporated a new plant functional type parameterization for Northern Rocky mixed-conifer into our simulation modeling using regional and site observations.

  8. The Effect of Multidimensional Motivation Interventions on Cognitive and Behavioral Components of Motivation: Testing Martin's Model

    Directory of Open Access Journals (Sweden)

    Fatemeh PooraghaRoodbarde

    2017-04-01

    Full Text Available Objective: The present study aimed at examining the effect of multidimensional motivation interventions based on Martin's model on cognitive and behavioral components of motivation. Methods: The research design was prospective with pretest, posttest, and follow-up, and 2 experimental groups. In this study, 90 students (45 participants in the experimental group and 45 in the control group) constituted the study sample; they were selected by the available (convenience) sampling method. Motivation interventions were implemented for fifteen 60-minute sessions 3 times a week, lasting about 2 months. Data were analyzed using a repeated-measures multivariate analysis of variance. Results: The findings revealed that multidimensional motivation interventions resulted in a significant increase in the scores of cognitive components such as self-efficacy, mastery goal, test anxiety, and feeling of lack of control, and behavioral components such as task management. The results of the one-month follow-up indicated the stability of the created changes in test anxiety and cognitive strategies; however, no significant difference was found between the 2 groups at the follow-up in self-efficacy, mastery goals, source of control, and motivation. Conclusions: The research evidence indicated that academic motivation is a multidimensional construct affected by cognitive and behavioral factors; therefore, researchers, teachers, and other authorities should attend to these factors to increase academic motivation.

  9. Component simulation in problems of calculated model formation of automatic machine mechanisms

    Directory of Open Access Journals (Sweden)

    Telegin Igor

    2017-01-01

    Full Text Available The paper deals with the application of the component simulation method to automating the formation of mechanical system models, with the further possibility of their CAD realization. The purpose of the investigation is to automate the formation of CAD models of high-speed mechanisms in automatic machines and to analyze the dynamic processes occurring in their units, taking into account their elasto-inertial properties, power dissipation, gaps in kinematic pairs, friction forces, and design and technological loads. As an example, the paper considers the formalization of the stages in forming a computer model of the cutting mechanism of the cold stamping automatic machine AV1818 and methods for computing its parameters on the basis of its solid-state model.

  10. A two component model describing nucleon structure functions in the low-x region

    Energy Technology Data Exchange (ETDEWEB)

    Bugaev, E.V. [Institute for Nuclear Research of the Russian Academy of Sciences, 7a, 60th October Anniversary prospect, Moscow 117312 (Russian Federation); Mangazeev, B.V. [Irkutsk State University, 1, Karl Marx Street, Irkutsk 664003 (Russian Federation)

    2009-12-15

    A two-component model describing the electromagnetic nucleon structure functions in the low-x region, based on generalized vector dominance and color dipole approaches, is briefly described. The model operates with the mesons of the rho family, having a mass spectrum of the form m_n^2 = m_rho^2 (1 + 2n), and takes into account the nondiagonal transitions in meson-nucleon scattering. Special cut-off factors are introduced in the model to exclude the gamma-qq-bar-V transitions in the case of narrow qq-bar pairs. For the color dipole part of the model the well-known FKS parameterization is used.
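
    The mass spectrum quoted above is easy to evaluate; a short sketch (taking the rho(770) mass as roughly 0.775 GeV) gives the first few members of the family:

```python
# Rho-family mass spectrum used in the model: m_n^2 = m_rho^2 * (1 + 2n)
M_RHO = 0.775  # GeV, approximate rho(770) mass

def rho_family_mass(n):
    return M_RHO * (1.0 + 2.0 * n) ** 0.5

for n in range(4):
    print(n, round(rho_family_mass(n), 3))  # 0.775, 1.342, 1.733, 2.051 GeV
```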

  11. Level shift two-components autoregressive conditional heteroscedasticity modelling for WTI crude oil market

    Science.gov (United States)

    Sin, Kuek Jia; Cheong, Chin Wen; Hooi, Tan Siow

    2017-04-01

    This study aims to investigate the crude oil volatility using a two components autoregressive conditional heteroscedasticity (ARCH) model with the inclusion of abrupt jump feature. The model is able to capture abrupt jumps, news impact, clustering volatility, long persistence volatility and heavy-tailed distributed error which are commonly observed in the crude oil time series. For the empirical study, we have selected the WTI crude oil index from year 2000 to 2016. The results found that by including the multiple-abrupt jumps in ARCH model, there are significant improvements of estimation evaluations as compared with the standard ARCH models. The outcomes of this study can provide useful information for risk management and portfolio analysis in the crude oil markets.
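
    As an illustration of the kind of recursion involved (not the authors' exact two-component specification), a plain ARCH(q) conditional-variance equation augmented with an abrupt-jump dummy can be sketched as:

```python
import numpy as np

def arch_variance_with_jumps(returns, omega, alphas, jump_dummy, gamma):
    """Conditional variance of an ARCH(q) model with a level-shift jump dummy.

    returns    : demeaned return series
    omega      : constant term (> 0)
    alphas     : ARCH coefficients, length q
    jump_dummy : 0/1 array marking detected abrupt-jump regimes
    gamma      : loading on the jump dummy
    All parameter names are illustrative; the paper's model adds a second
    volatility component and heavy-tailed errors.
    """
    q = len(alphas)
    sigma2 = np.full(len(returns), np.var(returns))
    for t in range(q, len(returns)):
        arch_part = sum(alphas[i] * returns[t - 1 - i] ** 2 for i in range(q))
        sigma2[t] = omega + arch_part + gamma * jump_dummy[t]
    return sigma2
```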

  12. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Science.gov (United States)

    O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D

    2015-01-01

    Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability.
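
    For readers unfamiliar with the baseline model being extended, the likelihood of a standard binomial N-mixture model for repeated counts at one site can be sketched by marginalizing the latent abundance; the temporary-emigration (availability) component added by the authors is omitted here.

```python
import numpy as np
from scipy.stats import poisson, binom

def nmixture_site_likelihood(counts, lam, p, n_max=200):
    """Likelihood of repeated counts y_1..y_J at a single site under
    N ~ Poisson(lam) and y_j | N ~ Binomial(N, p), marginalizing N up to n_max."""
    counts = np.asarray(counts)
    n_vals = np.arange(counts.max(), n_max + 1)          # candidate latent abundances
    prior = poisson.pmf(n_vals, lam)                     # P(N = n)
    lik = np.prod(binom.pmf(counts[:, None], n_vals[None, :], p), axis=0)  # P(y | N = n)
    return float(np.sum(prior * lik))

# Hypothetical example: three surveys with counts 4, 6, 3, mean abundance 10, detection p = 0.5
print(nmixture_site_likelihood([4, 6, 3], lam=10.0, p=0.5))
```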

  13. A molecular systems approach to modelling human skin pigmentation: identifying underlying pathways and critical components.

    Science.gov (United States)

    Raghunath, Arathi; Sambarey, Awanti; Sharma, Neha; Mahadevan, Usha; Chandra, Nagasuma

    2015-04-29

    Ultraviolet radiations (UV) serve as an environmental stress for human skin, and result in melanogenesis, with the pigment melanin having protective effects against UV induced damage. This involves a dynamic and complex regulation of various biological processes that results in the expression of melanin in the outer most layers of the epidermis, where it can exert its protective effect. A comprehensive understanding of the underlying cross talk among different signalling molecules and cell types is only possible through a systems perspective. Increasing incidences of both melanoma and non-melanoma skin cancers necessitate the need to better comprehend UV mediated effects on skin pigmentation at a systems level, so as to ultimately evolve knowledge-based strategies for efficient protection and prevention of skin diseases. A network model for UV-mediated skin pigmentation in the epidermis was constructed and subjected to shortest path analysis. Virtual knock-outs were carried out to identify essential signalling components. We describe a network model for UV-mediated skin pigmentation in the epidermis. The model consists of 265 components (nodes) and 429 directed interactions among them, capturing the manner in which one component influences the other and channels information. Through shortest path analysis, we identify novel signalling pathways relevant to pigmentation. Virtual knock-outs or perturbations of specific nodes in the network have led to the identification of alternate modes of signalling as well as enabled determining essential nodes in the process. The model presented provides a comprehensive picture of UV mediated signalling manifesting in human skin pigmentation. A systems perspective helps provide a holistic purview of interconnections and complexity in the processes leading to pigmentation. The model described here is extensive yet amenable to expansion as new data is gathered. Through this study, we provide a list of important proteins essential

  14. A four-component model of age-related memory change.

    Science.gov (United States)

    Healey, M Karl; Kahana, Michael J

    2016-01-01

    We develop a novel, computationally explicit, theory of age-related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that include aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates 4 components: (a) the ability to sustain attention across an encoding episode, (b) the ability to retrieve contextual representations for use as retrieval cues, (c) the ability to monitor retrievals and reject intrusions, and (d) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the 4-component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus, we provide a 4-component theory of a complex pattern of age differences across 2 key laboratory tasks. (c) 2015 APA, all rights reserved.

  15. A Four–Component Model of Age–Related Memory Change

    Science.gov (United States)

    Healey, M. Karl; Kahana, Michael J.

    2015-01-01

    We develop a novel, computationally explicit, theory of age–related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that includes aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates four components: 1) the ability to sustain attention across an encoding episode, 2) the ability to retrieve contextual representations for use as retrieval cues, 3) the ability to monitor retrievals and reject intrusions, and 4) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the four–component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus we provide a four–component theory of a complex pattern of age differences across two key laboratory tasks. PMID:26501233

  16. Forward modelling of multi-component induction logging tools in layered anisotropic dipping formations

    International Nuclear Information System (INIS)

    Gao, Jie; Xu, Chenhao; Xiao, Jiaqi

    2013-01-01

    Multi-component induction logging provides great assistance in the exploration of thinly laminated reservoirs. The 1D parametric inversion following an adaptive borehole correction is the key step in the data processing of multi-component induction logging responses. To make the inversion process reasonably fast, an efficient forward modelling method is necessary. In this paper, a modelling method has been developed to simulate the multi-component induction tools in deviated wells drilled in layered anisotropic formations. With the introduction of generalized reflection coefficients, the analytic expressions of magnetic field in the form of a Sommerfeld integral were derived. The fast numerical computation of the integral has been completed by using the fast Fourier–Hankel transform and fast Hankel transform methods. The latter is so time efficient that it is competent enough for real-time multi-parameter inversion. In this paper, some simulated results have been presented and they are in excellent agreement with the finite difference method code's solution. (paper)

  17. What Time Is Sunrise? Revisiting the Refraction Component of Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.; Hilton, James Lindsay

    2017-01-01

    Algorithms that predict sunrise and sunset times currently have an error of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, even including difficulties determining when the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compare these predictions with data sets of observed rise/set times to create a better model. Sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem. While there are a few data sets available, we will also begin collecting this data using smartphones as part of a citizen science project. The mobile application for this project will be available in the Google Play store. Data analysis will lead to more complete models that will provide more accurate rise/set times for the benefit of astronomers, navigators, and outdoorsmen everywhere.
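
    For context, the conventional calculation these algorithms start from is the hour-angle formula with a fixed apparent-altitude threshold of about -0.833° (roughly 34′ of standard refraction plus 16′ of solar semidiameter); varying that refraction term is exactly what the calculator described above does. A minimal sketch:

```python
import math

def sunrise_hour_angle_deg(latitude_deg, declination_deg, h0_deg=-0.833):
    """Hour angle of sunrise/sunset (degrees from solar noon) for a given
    apparent-altitude threshold h0; h0 = -0.833 deg is the conventional value
    bundling standard refraction with the solar semidiameter."""
    lat, dec, h0 = map(math.radians, (latitude_deg, declination_deg, h0_deg))
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    if cos_h > 1.0 or cos_h < -1.0:
        return None  # polar night or midnight sun at this declination
    return math.degrees(math.acos(cos_h))

# Example: 45 deg N at an equinox (declination ~0): about 91.2 deg, i.e. ~6.08 h before local noon
print(sunrise_hour_angle_deg(45.0, 0.0))
```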

  18. Bee venom and its component apamin as neuroprotective agents in a Parkinson disease mouse model.

    Science.gov (United States)

    Alvarez-Fischer, Daniel; Noelker, Carmen; Vulinović, Franca; Grünewald, Anne; Chevarin, Caroline; Klein, Christine; Oertel, Wolfgang H; Hirsch, Etienne C; Michel, Patrick P; Hartmann, Andreas

    2013-01-01

    Bee venom has recently been suggested to possess beneficial effects in the treatment of Parkinson disease (PD). For instance, it has been observed that bilateral acupoint stimulation of lower hind limbs with bee venom was protective in the acute 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) mouse model of PD. In particular, a specific component of bee venom, apamin, has previously been shown to have protective effects on dopaminergic neurons in vitro. However, no information regarding a potential protective action of apamin in animal models of PD is available to date. The specific goals of the present study were to (i) establish that the protective effect of bee venom for dopaminergic neurons is not restricted to acupoint stimulation, but can also be observed using a more conventional mode of administration and to (ii) demonstrate that apamin can mimic the protective effects of a bee venom treatment on dopaminergic neurons. Using the chronic mouse model of MPTP/probenecid, we show that bee venom provides sustained protection in an animal model that mimics the chronic degenerative process of PD. Apamin, however, reproduced these protective effects only partially, suggesting that other components of bee venom enhance the protective action of the peptide.

  19. Isotropic vs. anisotropic components of BAO data: a tool for model selection

    Science.gov (United States)

    Haridasu, Balakrishna S.; Luković, Vladimir V.; Vittorio, Nicola

    2018-05-01

    We conduct a selective analysis of the isotropic (DV) and anisotropic (AP) components of the most recent Baryon Acoustic Oscillations (BAO) data. We find that these components provide significantly different constraints and could provide strong diagnostics for model selection, also in view of more precise data to come. For instance, in the ΛCDM model we find a mild tension of ~2σ for the Ωm estimates obtained using DV and AP separately. Considering both Ωk and w as free parameters, we find that the concordance model is in tension with the best-fit values provided by the BAO data alone at 2.2σ. We complemented the BAO data with the Supernovae Ia (SNIa) and Observational Hubble datasets to perform a joint analysis on the ΛCDM model and its standard extensions. By assuming the ΛCDM scenario, we find that these data provide H0 = 69.4 ± 1.7 km s⁻¹ Mpc⁻¹ as the best-fit value for the present expansion rate. In the kΛCDM scenario we find that the evidence for acceleration using the BAO data alone is more than ~5.8σ, which increases to 8.4σ in our joint analysis.
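
    For reference, the isotropic and anisotropic observables referred to here are conventionally built from the comoving angular-diameter distance D_M(z) and the expansion rate H(z); a standard set of definitions from the BAO literature (not quoted from this paper) is:

```latex
\begin{align}
  D_V(z) &= \left[ D_M^2(z)\,\frac{c\,z}{H(z)} \right]^{1/3}
    && \text{isotropic (volume-averaged) distance} \\
  F_{\mathrm{AP}}(z) &= \frac{D_M(z)\,H(z)}{c}
    && \text{Alcock--Paczynski (anisotropic) combination}
\end{align}
```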

  20. A Simplified Multipath Component Modeling Approach for High-Speed Train Channel Based on Ray Tracing

    Directory of Open Access Journals (Sweden)

    Jingya Yang

    2017-01-01

    Full Text Available High-speed train (HST communications at millimeter-wave (mmWave band have received a lot of attention due to their numerous high-data-rate applications enabling smart rail mobility. Accurate and effective channel models are always critical to the HST system design, assessment, and optimization. A distinctive feature of the mmWave HST channel is that it is rapidly time-varying. To depict this feature, a geometry-based multipath model is established for the dominant multipath behavior in delay and Doppler domains. Because of insufficient mmWave HST channel measurement with high mobility, the model is developed by a measurement-validated ray tracing (RT simulator. Different from conventional models, the temporal evolution of dominant multipath behavior is characterized by its geometry factor that represents the geometrical relationship of the dominant multipath component (MPC to HST environment. Actually, during each dominant multipath lifetime, its geometry factor is fixed. To statistically model the geometry factor and its lifetime, the dominant MPCs are extracted within each local wide-sense stationary (WSS region and are tracked over different WSS regions to identify its “birth” and “death” regions. Then, complex attenuation of dominant MPC is jointly modeled by its delay and Doppler shift both which are derived from its geometry factor. Finally, the model implementation is verified by comparison between RT simulated and modeled delay and Doppler spreads.
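
    The geometry-factor idea can be illustrated with a single-bounce path: once a dominant scatterer is fixed, the MPC's delay and Doppler shift follow from the instantaneous train position and speed. A minimal sketch under that assumption (all names and values hypothetical):

```python
import math

def mpc_delay_doppler(tx_pos, scatterer_pos, rx_pos, rx_velocity, f_c=30e9):
    """Delay (s) and Doppler shift (Hz) of a single-bounce multipath component.

    tx_pos, scatterer_pos, rx_pos : 3-D positions in metres
    rx_velocity                   : receiver (train) velocity vector in m/s
    f_c                           : carrier frequency in Hz (mmWave example)
    """
    c = 3.0e8
    d_tx = math.dist(tx_pos, scatterer_pos)
    d_rx = math.dist(scatterer_pos, rx_pos)
    delay = (d_tx + d_rx) / c
    # Unit vector from scatterer towards the receiver; moving away => negative Doppler
    u = [(rx_pos[i] - scatterer_pos[i]) / d_rx for i in range(3)]
    radial_speed = sum(v * ui for v, ui in zip(rx_velocity, u))
    doppler = -radial_speed * f_c / c
    return delay, doppler

# Train at ~300 km/h (83.3 m/s) moving along x, scatterer beside the track
print(mpc_delay_doppler((0, 0, 10), (50, 5, 2), (100, 0, 3), (83.3, 0, 0)))
```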

  1. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  2. Towards a three-component model of fan loyalty: a case study of Chinese youth.

    Science.gov (United States)

    Zhang, Xiao-xiao; Liu, Li; Zhao, Xian; Zheng, Jian; Yang, Meng; Zhang, Ji-qi

    2015-01-01

    The term "fan loyalty" refers to the loyalty felt and expressed by a fan towards the object of his/her fanaticism in both everyday and academic discourses. However, much of the literature on fan loyalty has paid little attention to the topic from the perspective of youth pop culture. The present study explored the meaning of fan loyalty in the context of China. Data were collected by the method of in-depth interviews with 16 young Chinese people aged between 19 and 25 years who currently or once were pop fans. The results indicated that fan loyalty entails three components: involvement, satisfaction, and affiliation. These three components regulate the process of fan loyalty development, which can be divided into four stages: inception, upgrade, zenith, and decline. This model provides a conceptual explanation of why and how young Chinese fans are loyal to their favorite stars. The implications of the findings are discussed.

  3. Towards a three-component model of fan loyalty: a case study of Chinese youth.

    Directory of Open Access Journals (Sweden)

    Xiao-xiao Zhang

    Full Text Available The term "fan loyalty" refers to the loyalty felt and expressed by a fan towards the object of his/her fanaticism in both everyday and academic discourses. However, much of the literature on fan loyalty has paid little attention to the topic from the perspective of youth pop culture. The present study explored the meaning of fan loyalty in the context of China. Data were collected by the method of in-depth interviews with 16 young Chinese people aged between 19 and 25 years who currently or once were pop fans. The results indicated that fan loyalty entails three components: involvement, satisfaction, and affiliation. These three components regulate the process of fan loyalty development, which can be divided into four stages: inception, upgrade, zenith, and decline. This model provides a conceptual explanation of why and how young Chinese fans are loyal to their favorite stars. The implications of the findings are discussed.

  4. Strange statistics, braid group representations and multipoint functions in the N-component model

    International Nuclear Information System (INIS)

    Lee, H.C.; Ge, M.L.; Couture, M.; Wu, Y.S.

    1989-01-01

    The statistics of fields in low dimensions is studied from the point of view of the braid group B_n of n strings. Explicit representations M_R for the N-component model, N = 2 to 5, are derived by solving the Yang-Baxter-like braid group relations for the statistical matrix R, which describes the transformation of the bilinear product of two N-component fields under the transposition of coordinates. When R^2 ≠ 1 the statistics is neither Bose-Einstein nor Fermi-Dirac; it is strange. It is shown that for each N, the (N + 1)-parameter family of solutions obtained is the most general one under a given set of constraints including charge conservation. Extended Nth-order (N > 2) Alexander-Conway relations for link polynomials are derived. They depend nonhomogeneously on only one of the N + 1 parameters. The N = 3 and 4 ones agree with those previously derived

  5. Model of components in a process of acoustic diagnosis correlated with learning

    International Nuclear Information System (INIS)

    Seballos, S.; Costabal, H.; Matamala, P.

    1992-06-01

    Using Linden's functional scheme as a theoretical reference framework, we define a matrix of components for clinical and field applications in the acoustic diagnostic process and its correlations with audiologic, learning and behavioral problems. It is expected that the model effectively contributes to classifying and providing greater knowledge about this multidisciplinary problem. Although the exact nature of these components is at present a matter to be defined, their correlations can be hypothetically established. Applying this descriptive and integral approach in the diagnostic process makes it possible, if not to avoid, at least to decrease the uncertainties and to ensure proper solutions, becoming a powerful tool applicable to environmental studies and/or social claims. (author). 8 refs, 2 figs

  6. A Three-Component Model for Magnetization Transfer. Solution by Projection-Operator Technique, and Application to Cartilage

    Science.gov (United States)

    Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.

    1996-01-01

    A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is natural separation of relaxation and source terms and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.

  7. Research on CO2 ejector component efficiencies by experiment measurement and distributed-parameter modeling

    International Nuclear Information System (INIS)

    Zheng, Lixing; Deng, Jianqiang

    2017-01-01

    Highlights: • The ejector distributed-parameter model is developed to study ejector efficiencies. • Feasible component and total efficiency correlations of the ejector are established. • New efficiency correlations are applied to obtain dynamic characteristics of the EERC. • More suitable fixed efficiency values can be determined by the proposed correlations. - Abstract: In this study we combine experimental measurement data and a theoretical model of the ejector to determine CO2 ejector component efficiencies, including the motive nozzle, suction chamber, mixing section and diffuser efficiencies, as well as the total ejector efficiency. The ejector is modeled using the distributed-parameter method: the flow passage is divided into a number of elements and the governing equations are formulated from the differential equations of mass, momentum and energy conservation. The efficiencies of the ejector are investigated under different ejector geometric parameters and operational conditions, and the corresponding empirical correlations are established. Moreover, the correlations are incorporated into a transient model of a transcritical CO2 ejector expansion refrigeration cycle (EERC) and dynamic simulations are performed based on variable component efficiencies and on fixed values. The motive nozzle, suction chamber, mixing section and diffuser efficiencies vary from 0.74 to 0.89, 0.86 to 0.96, 0.73 to 0.9 and 0.75 to 0.95 under the studied conditions, respectively. The responses of suction flow pressure and discharge pressure differ noticeably between the variable-efficiency model and models with fixed efficiencies taken from previous studies, whereas when the fixed values are determined from the presented correlations the responses are essentially the same.
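
    The component efficiencies discussed above are typically isentropic-type definitions; for instance, commonly used motive-nozzle and diffuser efficiencies (standard textbook forms, not necessarily the exact definitions adopted in the paper) compare the actual enthalpy change with the ideal isentropic change between the same pressures:

```latex
\eta_{\mathrm{nozzle}} \;=\; \frac{h_{\mathrm{in}} - h_{\mathrm{out}}}{h_{\mathrm{in}} - h_{\mathrm{out},s}},
\qquad
\eta_{\mathrm{diffuser}} \;=\; \frac{h_{\mathrm{out},s} - h_{\mathrm{in}}}{h_{\mathrm{out}} - h_{\mathrm{in}}}
```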

  8. Identifying spikes and seasonal components in electricity spot price data: A guide to robust modeling

    International Nuclear Information System (INIS)

    Janczura, Joanna; Trück, Stefan; Weron, Rafał; Wolff, Rodney C.

    2013-01-01

    An important issue in fitting stochastic models to electricity spot prices is the estimation of a component to deal with trends and seasonality in the data. Unfortunately, estimation routines for the long-term and short-term seasonal pattern are usually quite sensitive to extreme observations, known as electricity price spikes. Improved robustness of the model can be achieved by (a) filtering the data with some reasonable procedure for outlier detection, and then (b) using estimation and testing procedures on the filtered data. In this paper we examine the effects of different treatments of extreme observations on model estimation and on determining the number of spikes (outliers). In particular we compare results for the estimation of the seasonal and stochastic components of electricity spot prices using either the original or filtered data. We find significant evidence for a superior estimation of both the seasonal short-term and long-term components when the data have been treated carefully for outliers. Overall, our findings point out the substantial impact the treatment of extreme observations may have on these issues and, therefore, also on the pricing of electricity derivatives like futures and option contracts. An added value of our study is the ranking of different filtering techniques used in the energy economics literature, suggesting which methods could be and which should not be used for spike identification. - Highlights: • First comprehensive study on the impact of spikes on seasonal pattern estimation • The effects of different treatments of spikes on model estimation are examined. • Cleaning spot prices for outliers yields superior estimates of the seasonal pattern. • Removing outliers provides better parameter estimates for the stochastic process. • Rankings of filtering techniques suggested in the literature are provided
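
    One of the simplest filtering procedures of the kind ranked in such studies is a recursive standard-deviation filter: flag observations deviating more than a threshold number of standard deviations from the mean of the not-yet-flagged data, remove them, and repeat. A sketch (illustrative only; the paper compares several, more refined variants, usually applied to deseasonalized prices):

```python
import numpy as np

def recursive_spike_filter(prices, threshold=3.0, max_passes=10):
    """Return a boolean mask marking price spikes (outliers).

    prices    : 1-D array of (ideally deseasonalized) spot prices
    threshold : number of standard deviations defining a spike
    """
    prices = np.asarray(prices, dtype=float)
    is_spike = np.zeros(prices.size, dtype=bool)
    for _ in range(max_passes):
        kept = prices[~is_spike]
        mu, sd = kept.mean(), kept.std()
        new = (~is_spike) & (np.abs(prices - mu) > threshold * sd)
        if not new.any():
            break
        is_spike |= new
    return is_spike

# Example: with threshold=2.0 the two extreme prices are flagged
print(recursive_spike_filter([30, 32, 31, 29, 250, 33, 30, 400, 31], threshold=2.0))
```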

  9. Improved predictive model for n-decane kinetics across species, as a component of hydrocarbon mixtures.

    Science.gov (United States)

    Merrill, E A; Gearhart, J M; Sterner, T R; Robinson, P J

    2008-07-01

    n-Decane is considered a major component of various fuels and industrial solvents. These hydrocarbon products are complex mixtures of hundreds of components, including straight-chain alkanes, branched chain alkanes, cycloalkanes, diaromatics, and naphthalenes. Human exposures to the jet fuel, JP-8, or to industrial solvents in vapor, aerosol, and liquid forms all have the potential to produce health effects, including immune suppression and/or neurological deficits. A physiologically based pharmacokinetic (PBPK) model has previously been developed for n-decane, in which partition coefficients (PC), fitted to 4-h exposure kinetic data, were used in preference to measured values. The greatest discrepancy between fitted and measured values was for fat, where PC values were changed from 250-328 (measured) to 25 (fitted). Such a large change in a critical parameter, without any physiological basis, greatly impedes the model's extrapolative abilities, as well as its applicability for assessing the interactions of n-decane or similar alkanes with other compounds in a mixture model. Due to these limitations, the model was revised. Our approach emphasized the use of experimentally determined PCs because many tissues had not approached steady-state concentrations by the end of the 4-h exposures. Diffusion limitation was used to describe n-decane kinetics for the brain, perirenal fat, skin, and liver. Flow limitation was used to describe the remaining rapidly and slowly perfused tissues. As expected from the high lipophilicity of this semivolatile compound (log K(ow) = 5.25), sensitivity analyses showed that parameters describing fat uptake were next to blood:air partitioning and pulmonary ventilation as critical in determining overall systemic circulation and uptake in other tissues. In our revised model, partitioning into fat took multiple days to reach steady state, which differed considerably from the previous model that assumed steady-state conditions in fat at 4 h post

  10. High frequent modelling of a modular multilevel converter using passive components

    DEFF Research Database (Denmark)

    El-Khatib, Walid Ziad; Holbøll, Joachim; Rasmussen, Tonny Wederberg

    2013-01-01

    ). This means that a high frequency model of the converter has to be designed, which gives a better overview of the impact of high frequency transients etc. The functionality of the model is demonstrated by application to grid connections of off-shore wind power plants. Grid connection of an offshore wind power...... wind power plant employing HVDC. In the present study, a back to back HVDC transmission system is designed in PSCAD/EMTDC. Simulations and results showing the importance of high frequent modeling are presented....... plant using HVDC fundamentally changes the electrical environment for the power plant. Detailed knowledge and understanding of the characteristics and behavior of all relevant power system components under all conditions, including under transients, are required in order to develop reliable offshore...

  11. An Evaluation of Semiempirical Models for Partitioning Photosynthetically Active Radiation Into Diffuse and Direct Beam Components

    Science.gov (United States)

    Oliphant, Andrew J.; Stoy, Paul C.

    2018-03-01

    Photosynthesis is more efficient under diffuse than direct beam photosynthetically active radiation (PAR) per unit PAR, but diffuse PAR is infrequently measured at research sites. We examine four commonly used semiempirical models (Erbs et al., 1982, https://doi.org/10.1016/0038-092X(82)90302-4; Gu et al., 1999, https://doi.org/10.1029/1999JD901068; Roderick, 1999, https://doi.org/10.1016/S0168-1923(99)00028-3; Weiss & Norman, 1985, https://doi.org/10.1016/0168-1923(85)90020-6) that partition PAR into diffuse and direct beam components based on the negative relationship between atmospheric transparency and scattering of PAR. Radiation observations at 58 sites (140 site years) from the La Thuille FLUXNET data set were used for model validation and coefficient testing. All four models did a reasonable job of predicting the diffuse fraction of PAR (ϕ) at the 30 min timescale, with site median r2 values ranging between 0.85 and 0.87, model efficiency coefficients (MECs) between 0.62 and 0.69, and regression slopes within 10% of unity. Model residuals were not strongly correlated with astronomical or standard meteorological variables. We conclude that the Roderick (1999, https://doi.org/10.1016/S0168-1923(99)00028-3) and Gu et al. (1999, https://doi.org/10.1029/1999JD901068) models performed better overall than the two older models. Using the basic form of these models, the data set was used to find both individual site and universal model coefficients that optimized predictive accuracy. A new universal form of the model is presented in section 5 that increased site median MEC to 0.73. Site-specific model coefficients increased median MEC further to 0.78, indicating usefulness of local/regional training of coefficients to capture the local distributions of aerosols and cloud types.
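
    For orientation, the oldest of the four models evaluated here (Erbs et al., 1982) expresses the diffuse fraction as a piecewise function of the clearness index k_t; a sketch with the commonly quoted broadband coefficients is given below. The coefficients shown are the familiar ones for global solar radiation, reproduced from memory for illustration; the paper fits PAR-specific and site-specific coefficients.

```python
def diffuse_fraction_erbs(kt):
    """Diffuse fraction of incoming radiation as a piecewise polynomial in the
    clearness index kt (ratio of surface to extraterrestrial irradiance)."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165

for kt in (0.1, 0.5, 0.75):
    print(kt, round(diffuse_fraction_erbs(kt), 3))
```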

  12. Simulated lumbar minimally invasive surgery educational model with didactic and technical components.

    Science.gov (United States)

    Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James

    2013-10-01

    The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need for maximizing efficiency and proficiency have encouraged advancements in simulator-based learning models. To confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Predidactic and postdidactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Mean scores on the written didactic test improved from 78% to 100%. The technical component scores for fluoroscopic guidance improved from 58.8 to 52.9. The technical score for computed tomography-navigated guidance also improved from 28.3 to 26.6. Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and outcomes for patients.

  13. Design of aseismic class components: measurement of frequency parameters and optimization of analytical models

    International Nuclear Information System (INIS)

    Panet, M.; Delmas, J.; Ballester, J.L.

    1993-04-01

    In each plant unit, there are about 250 earthquake-qualified safety related valves. Justifying their aseismic capacity has proved complex. The structures are so diversified that it is not easy for designers to determine a generic model. Generally speaking, the models tend to overestimate the resonance frequencies. An approach more representative of the actual structure of the component was consequently sought, on which qualification of technological options with respect to the safety authorities would be based, thereby optimizing vibrating table qualification test schedules. The paper describes application of the approximate spectral identification method from the OPTDIM system, which determines basic structure modal data to forecast the approximate eigenfrequencies of a sub-domain, materialized by the component. It is used for a posteriori justification of topworks in operating equipment (900 MWe series), with respect to the 33 Hz ≤ f condition, which guarantees zero amplification of seismic induced internal loads. In the seismic design context and supplementing the preliminary eigenfrequency studies, inverse method solution techniques are used to define the most representative model of the modal behaviour of an electrically controlled motor-operated valve. (authors). 6 figs., 6 tabs., 11 refs

  14. A multi-component oil spill model for calculation of evaporation and dissolution of condensate

    International Nuclear Information System (INIS)

    Rye, H.

    1994-01-01

    It is sometimes argued that oil spilled on the sea surface will evaporate much faster than it dissolves. This statement may not always be true due to effects from wave action. In such cases high concentrations in the water may occur which could be harmful to biological life below the sea surface. This paper describes a numerical model which simulates the surface spreading of a continuous spill exposed to currents, wind and wave action. The spill is decomposed into the different constituents present in it. The oil or condensate is divided into 20 different classes of increasing carbon number within the interval C4 to C55. Asphaltenes are not included (non-emulsifying spill). Within each class, the hydrocarbons are divided further into 5 subsets (n-alkanes, cycloalkanes, aromatics, naphthenes and resins). The model then keeps track of what happens to each of the components (evaporation, dissolution, dispersion as droplets, or remaining in the slick) during an actual spill event. The effect of wave action is included by assuming a balance between the downward flux of hydrocarbons caused by breaking waves and the upward flux of droplets driven by their buoyancy. The dissolution and evaporation of the different oil (or spill) components are then computed. The model shows that evaporation and dissolution may in some cases be competing processes, in particular for the aromatic compounds. The paper outlines the approach chosen, as well as some example results. 16 refs., 2 figs., 4 tabs
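
    As a toy illustration of the competition between evaporation and dissolution for a single pseudo-component (not the flux formulations of the model described above, which derives the rates from component properties, wave entrainment and slick spreading), constant first-order losses give:

```python
import numpy as np

def component_fate(m0, k_evap, k_diss, t_hours):
    """Mass remaining, evaporated and dissolved for one pseudo-component after
    t_hours, assuming constant first-order rate coefficients (1/h).
    Illustrative only; all rate values are hypothetical."""
    k_tot = k_evap + k_diss
    decay = np.exp(-k_tot * t_hours)
    remaining = m0 * decay
    evaporated = m0 * (k_evap / k_tot) * (1.0 - decay)
    dissolved = m0 * (k_diss / k_tot) * (1.0 - decay)
    return remaining, evaporated, dissolved

# Light aromatic-like component: evaporation and dissolution of comparable magnitude
print(component_fate(m0=100.0, k_evap=0.30, k_diss=0.10, t_hours=6.0))
```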

  15. A participatory systems approach to modeling social, economic, and ecological components of bioenergy

    International Nuclear Information System (INIS)

    Buchholz, Thomas S.; Volk, Timothy A.; Luzadis, Valerie A.

    2007-01-01

    Availability of and access to useful energy is a crucial factor for maintaining and improving human well-being. Looming scarcities and increasing awareness of environmental, economic, and social impacts of conventional sources of non-renewable energy have focused attention on renewable energy sources, including biomass. The complex interactions of social, economic, and ecological factors among the bioenergy system components of feedstock supply, conversion technology, and energy allocation have been a major obstacle to the broader development of bioenergy systems. For widespread implementation of bioenergy to occur there is a need for an integrated approach to model the social, economic, and ecological interactions associated with bioenergy. Such models can serve as a planning and evaluation tool to help decide when, where, and how bioenergy systems can contribute to development. One approach to integrated modeling is by assessing the sustainability of a bioenergy system. The evolving nature of sustainability can be described by an adaptive systems approach using general systems principles. Discussing these principles reveals that participation of stakeholders in all components of a bioenergy system is a crucial factor for sustainability. Multi-criteria analysis (MCA) is an effective tool to implement this approach. This approach would enable decision-makers to evaluate bioenergy systems for sustainability in a participatory, transparent, timely, and informed manner

  16. The reduced kinome of Ostreococcus tauri: core eukaryotic signalling components in a tractable model species.

    Science.gov (United States)

    Hindle, Matthew M; Martin, Sarah F; Noordally, Zeenat B; van Ooijen, Gerben; Barrios-Llerena, Martin E; Simpson, T Ian; Le Bihan, Thierry; Millar, Andrew J

    2014-08-02

    The current knowledge of eukaryote signalling originates from phenotypically diverse organisms. There is a pressing need to identify conserved signalling components among eukaryotes, which will lead to the transfer of knowledge across kingdoms. Two useful properties of a eukaryote model for signalling are (1) reduced signalling complexity, and (2) conservation of signalling components. The alga Ostreococcus tauri is described as the smallest free-living eukaryote. With less than 8,000 genes, it represents a highly constrained genomic palette. Our survey revealed 133 protein kinases and 34 protein phosphatases (1.7% and 0.4% of the proteome). We conducted phosphoproteomic experiments and constructed domain structures and phylogenies for the catalytic protein-kinases. For each of the major kinases families we review the completeness and divergence of O. tauri representatives in comparison to the well-studied kinomes of the laboratory models Arabidopsis thaliana and Saccharomyces cerevisiae, and of Homo sapiens. Many kinase clades in O. tauri were reduced to a single member, in preference to the loss of family diversity, whereas TKL and ABC1 clades were expanded. We also identified kinases that have been lost in A. thaliana but retained in O. tauri. For three, contrasting eukaryotic pathways - TOR, MAPK, and the circadian clock - we established the subset of conserved components and demonstrate conserved sites of substrate phosphorylation and kinase motifs. We conclude that O. tauri satisfies our two central requirements. Several of its kinases are more closely related to H. sapiens orthologs than S. cerevisiae is to H. sapiens. The greatly reduced kinome of O. tauri is therefore a suitable model for signalling in free-living eukaryotes.

  17. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Directory of Open Access Journals (Sweden)

    Katherine M O'Donnell

    Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling, while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling. By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and

  18. Correlation inequalities for two-component hypercubic φ⁴ models

    International Nuclear Information System (INIS)

    Soria, J.L.

    1988-01-01

    A collection of new and already known correlation inequalities is found for a family of two-component hypercubic φ⁴ models, using techniques of duplicated variables, rotated correlation inequalities, and random walk representation. Among the interesting new inequalities are: a rotated very special Dunlop-Newman inequality ⟨φ_1z²; φ_1z² + φ_2z²⟩ ≥ 0, a rotated Griffiths I inequality ⟨φ_1z² - φ_2z²⟩ ≥ 0, and an anti-Lebowitz inequality u_4^1111 ≥ 0

  19. Dynamic models of reduced order of main components of an MSR

    International Nuclear Information System (INIS)

    Garcia B, F. B.; Morales S, J. B.; Polo L, M. A.; Espinosa P, G.

    2011-11-01

    The molten salt reactors known as Molten Salt Fast Reactors (MSFR) have seen a resurgence of interest in the last decade. This design is one of the six proposed for the Generation IV reactors. The most active development took place from the mid-1950s to the early 1970s at the Oak Ridge National Laboratory (ORNL). In this work the mathematical modeling of the main components in the primary and secondary circuits of an MSR is presented. In particular, the dynamics of the heat exchanger is analyzed and several materials are considered in order to optimize the system thermodynamically. (Author)

  20. Kernel Principal Component Analysis and its Applications in Face Recognition and Active Shape Models

    OpenAIRE

    Wang, Quan

    2012-01-01

    Principal component analysis (PCA) is a popular tool for linear dimensionality reduction and feature extraction. Kernel PCA is the nonlinear form of PCA, which better exploits the complicated spatial structure of high-dimensional features. In this paper, we first review the basic ideas of PCA and kernel PCA. Then we focus on the reconstruction of pre-images for kernel PCA. We also give an introduction on how PCA is used in active shape models (ASMs), and discuss how kernel PCA can be applied ...
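
    A minimal, hedged illustration of the ideas reviewed above - ordinary PCA, kernel PCA, and pre-image reconstruction - using scikit-learn on a toy two-ring dataset rather than anything from the paper; the RBF kernel and the gamma value are arbitrary assumptions.

    ```python
    # Toy comparison of linear PCA and kernel PCA with approximate pre-images.
    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.decomposition import PCA, KernelPCA

    X, _ = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

    linear = PCA(n_components=2).fit(X)
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0,
                     fit_inverse_transform=True).fit(X)   # enables approximate pre-images

    Z = kpca.transform(X)                  # coordinates in the nonlinear feature space
    X_back = kpca.inverse_transform(Z)     # approximate pre-images in the input space

    print("linear PCA explained variance ratios:", linear.explained_variance_ratio_)
    print("mean squared pre-image reconstruction error:", np.mean((X - X_back) ** 2))
    ```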

  1. Field-theoretic model of Harari's two component phenomenological theory of high energy hadron scattering

    International Nuclear Information System (INIS)

    Dymski, T.C.

    1976-01-01

    For high energy scattering of pseudoscalar particles on spin-1/2 particles, the transition amplitude (for a given signature) is constructed as an infinite sum over spin of boson exchange graphs of the Feynman type, each of which has impact parameters up to some value R completely removed. This amplitude is advanced as a field theoretic realization of the nondiffractive component of Harari's dual absorption model. Comparison with π±p → π±p and π⁻p → π⁰n data shows that the fits to the imaginary parts of both helicity amplitudes are excellent, for either signature

  2. Atmospheric Constituents in GEOS-5: Components for an Earth System Model

    Science.gov (United States)

    Pawson, Steven; Douglass, Anne; Duncan, Bryan; Nielsen, Eric; Ott, Leslie; Strode, Sarah

    2011-01-01

    The GEOS-5 model is being developed for weather and climate processes, including the implementation of "Earth System" components. While the stratospheric chemistry capabilities are mature, we are presently extending this to include predictions of the tropospheric composition and chemistry - this includes CO2, CH4, CO, nitrogen species, etc. (Aerosols are also implemented, but are beyond the scope of this paper.) This work will give an overview of our chemistry modules, the approaches taken to represent surface emissions and uptake of chemical species, and some studies of the sensitivity of the atmospheric circulation to changes in atmospheric composition. Results are obtained through focused experiments and multi-decadal simulations.

  3. Fabrication of nuclear ship reactor MRX model and study on inspection and maintenance of components

    International Nuclear Information System (INIS)

    Kasahara, Yoshiyuki; Nakazawa, Toshio; Kusunoki, Tsuyoshi; Takahashi, Hiroki; Yoritsune, Tsutomu.

    1997-10-01

    The MRX (Marine Reactor X) is an integral type small reactor adopting passive safety systems. In an integral type reactor, the primary system components are installed in the reactor vessel. It is therefore important to establish appropriate procedures for construction, inspection and maintenance, dismantling, etc., for all components in the reactor vessel as well as in the reactor containment, because the inspection space is limited. To study these subjects, a one-fifth model of the MRX was fabricated and operation capabilities were studied. As a result of these studies, the following results were obtained. (1) No fundamental manufacturing or installation problems were observed for the reactor pressure vessel, the containment vessel and the internal components. (2) The heat transfer tube structures of the steam generator and of the heat exchangers of the emergency decay heat removal system and the containment water cooler showed no fabrication problems. However, due consideration is required in the detailed design of the supports of the heat transfer tubes. (3) Further studies are needed on the designs of flange penetrations and on leak countermeasures for pipes and instrument cables. (4) The arrangement of equipment in the containment should be considered in detail because the space is limited. (5) Further discussion is required on installation methods for instruments and cables. (author)

  4. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano

    2017-01-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over a long period of time, in hostile environments and in restricted workplaces. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components, among which Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction) can be highlighted. The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, through acting on human factors issues. (author)

  5. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: raissaomarques@gmail.com, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task, during which the inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over a long period of time, in hostile environments and in restricted workplaces. A successful NDI requires careful planning, choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components, among which Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction) can be highlighted. The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant, through acting on human factors issues. (author)

  6. CONCEPT AND MODELS FOR EVALUATION OF BLACK AND WHITE SMOKE COMPONENTS IN DIESEL ENGINE EXHAUST

    Directory of Open Access Journals (Sweden)

    Igor BLYANKINSHTEIN

    2017-09-01

    Full Text Available A method for measuring exhaust smoke opacity has been developed, which allows separate estimation of the components forming black smoke and of those forming white smoke. The method is based on video recording and special software for processing the video recording data. The flow of the diesel exhaust gas is visualised using a digital camera, against the background of a screen, at the cut of the exhaust pipe, and with sufficient illumination of the area. The screen carries standards of whiteness and blackness. The content of the black components (soot) is determined by the degree of blackening of the white standard in the frames of the video, and the content of whitish components (unburned fuel and oil, etc.) is determined by the degree of whitening of the black standard in the frames of the video. The paper describes the principle and the results of testing the proposed method of measuring exhaust smoke opacity. We present an algorithm for the frame-by-frame analysis of the video sequence, and static and dynamic mathematical models of exhaust opacity, measured under free acceleration of a diesel engine.

  7. THM modelling of buffer, backfill and other system components. Critical processes and scenarios

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Kristensson, Ola; Boergesson, Lennart; Dueck, Ann; Hernelind, Jan

    2010-03-01

    A number of critical thermo-hydro-mechanical processes and scenarios for the buffer, tunnel backfill and other filling components in the repository have been identified. These processes and scenarios, representing different aspects of the repository evolution, have been pinpointed and modelled. In total, 22 cases have been modelled. Most cases have been analysed with finite element (FE) calculations, using primarily the two codes Abaqus and Code_Bright. For some cases analytical methods have been used, either to supplement the FE calculations or because the scenario has a character that makes it unsuitable or very difficult for the FE method. Material models, element models and the choice of parameters as well as presumptions have been stated for all modelling cases. In addition, the results have been analysed and conclusions drawn for each case. The uncertainties have also been analysed. Besides the information given for all cases studied, the codes and material models have been described in a separate so-called data report

  8. THM modelling of buffer, backfill and other system components. Critical processes and scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Kristensson, Ola; Boergesson, Lennart; Dueck, Ann (Clay Technology AB, Lund (Sweden)); Hernelind, Jan (5T-Engineering AB, Vaesteraas (Sweden))

    2010-03-15

    A number of critical thermo-hydro-mechanical processes and scenarios for the buffer, tunnel backfill and other filling components in the repository have been identified. These processes and scenarios, representing different aspects of the repository evolution, have been pinpointed and modelled. In total, 22 cases have been modelled. Most cases have been analysed with finite element (FE) calculations, using primarily the two codes Abaqus and Code-Bright. For some cases analytical methods have been used, either to supplement the FE calculations or because the scenario has a character that makes it unsuitable or very difficult for the FE method. Material models, element models and the choice of parameters as well as presumptions have been stated for all modelling cases. In addition, the results have been analysed and conclusions drawn for each case. The uncertainties have also been analysed. Besides the information given for all cases studied, the codes and material models have been described in a separate so-called data report

  9. A finite element method based microwave heat transfer modeling of frozen multi-component foods

    Science.gov (United States)

    Pitchai, Krishnamoorthy

    Microwave heating is fast and convenient, but is highly non-uniform. Non-uniform heating in microwave cooking affects not only food quality but also food safety. Most food industries develop microwavable food products based on a "cook-and-look" approach. This approach is time-consuming, labor intensive and expensive, and may not result in an optimal food product design that assures food safety and quality. Design of microwavable food can be realized through a simulation model which describes the physical mechanisms of microwave heating in mathematical expressions. The objective of this study was to develop a microwave heat transfer model to predict spatial and temporal profiles of various heterogeneous foods such as a multi-component meal (chicken nuggets and mashed potato), a multi-component and multi-layered meal (lasagna), and a multi-layered food with active packages (pizza) during microwave heating. A microwave heat transfer model was developed by solving electromagnetic and heat transfer equations using the finite element method in the commercially available COMSOL Multiphysics v4.4 software. The microwave heat transfer model included detailed geometry of the cavity, phase change, and rotation of the food on the turntable. The predicted spatial surface temperature patterns and temporal profiles were validated against the experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The predicted spatial surface temperature profile of different multi-component foods was in good agreement with the corresponding experimental profiles in terms of hot and cold spot patterns. The root mean square error values of temporal profiles ranged from 5.8 °C to 26.2 °C in chicken nuggets, as compared with 4.3 °C to 4.7 °C in mashed potatoes. In frozen lasagna, root mean square error values at six locations ranged from 6.6 °C to 20.0 °C for 6 min of heating. A microwave heat transfer model was developed to include susceptor-assisted microwave heating of a

  10. Dose rates modeling of pressurized water reactor primary loop components with SCALE6.0

    International Nuclear Information System (INIS)

    Matijević, Mario; Pevec, Dubravko; Trontl, Krešimir

    2015-01-01

    Highlights: • Shielding analysis of typical PWR primary loop components was performed. • The FW-CADIS methodology was thoroughly investigated using the SCALE6.0 code package. • The versatile ability of SCALE6.0/FW-CADIS for deep penetration models was proved. • An adjoint source focused on a specific material can improve MC modeling. - Abstract: A SCALE6.0 simulation model of typical PWR primary loop components for effective dose rate calculations, based on a hybrid deterministic–stochastic methodology, was created. The criticality sequence CSAS6/KENO-VI of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, was used for criticality calculations, while neutron and gamma dose rate distributions were determined by the MAVRIC/Monaco shielding sequence. A detailed model of the combinatorial geometry, materials and characteristics of a generic two-loop PWR facility is based on the best available input data. The sources of ionizing radiation in the PWR primary loop components included neutrons and photons originating from the critical core and photons from the activated coolant in the two primary loops. Detailed calculations of the reactor pressure vessel and the upper reactor head have been performed. The efficiency of particle transport for obtaining global Monte Carlo dose rates was further examined and quantified with a flexible adjoint source positioning in phase-space. It was demonstrated that generation of an accurate importance map (VR parameters) is a paramount step which enabled obtaining Monaco dose rates with fairly uniform uncertainties. Computer memory consumption by the SN part of the hybrid methodology represents the main obstacle when using meshes with a large number of cells together with high SN/PN parameters. A detailed voxelization (homogenization) process in Denovo together with high SN/PN parameters is essential for precise VR parameter generation, which results in optimized MC distributions. Shielding calculations were also performed for the reduced PWR

  11. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    Science.gov (United States)

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
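
    The variance-partitioning idea can be sketched with a small simulation: the log of the expected catch is the sum of site, year, and site-by-year random effects, and the observed counts are negative binomial around that mean. The standard deviations, dispersion parameter and grand mean below are invented for illustration and are not estimates from the Walleye surveys.

    ```python
    # Simulated counts with site, year, and site-by-year variance components
    # and negative binomial (NB2) sampling; all numbers are made up.
    import numpy as np

    rng = np.random.default_rng(42)
    n_sites, n_years = 30, 15
    sd_site, sd_year, sd_sxy = 0.8, 0.4, 0.3     # random-effect standard deviations
    theta = 2.0                                   # NB dispersion (size) parameter

    site = rng.normal(0, sd_site, n_sites)[:, None]
    year = rng.normal(0, sd_year, n_years)[None, :]
    interaction = rng.normal(0, sd_sxy, (n_sites, n_years))

    mu = np.exp(np.log(5.0) + site + year + interaction)   # grand mean catch of 5 fish
    # NB2 parameterisation: variance = mu + mu^2 / theta
    counts = rng.negative_binomial(theta, theta / (theta + mu))

    print("proportion of zero catches:", np.mean(counts == 0))
    print("share of random-effect variance due to site:",
          sd_site**2 / (sd_site**2 + sd_year**2 + sd_sxy**2))
    ```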

  12. Anomalous NMR Relaxation in Cartilage Matrix Components and Native Cartilage: Fractional-Order Models

    Science.gov (United States)

    Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-01-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
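
    In generic notation (not copied from the paper), the kind of fractional-order generalisation described here replaces the mono-exponential transverse decay with a Mittag-Leffler function, often approximated empirically by a stretched exponential; α = 1 recovers conventional relaxation:

    ```latex
    M_{xy}(t) = M_0\, e^{-t/T_2}
    \quad\longrightarrow\quad
    M_{xy}(t) = M_0\, E_\alpha\!\bigl[-(t/T_2)^{\alpha}\bigr],
    \qquad
    E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)},
    \qquad
    M_{xy}(t) \approx M_0\, e^{-(t/T_2)^{\alpha}} .
    ```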

  13. Modeling the variability of solar radiation data among weather stations by means of principal components analysis

    International Nuclear Information System (INIS)

    Zarzo, Manuel; Marti, Pau

    2011-01-01

    Research highlights: → Principal components analysis was applied to Rs data recorded at 30 stations. → Four principal components explain 97% of the data variability. → The latent variables can be fitted according to latitude, longitude and altitude. → The PCA approach is more effective for gap infilling than conventional approaches. → The proposed method allows daily Rs estimations at locations in the area of study. - Abstract: Measurements of global terrestrial solar radiation (Rs) are commonly recorded in meteorological stations. Daily variability of Rs has to be taken into account for the design of photovoltaic systems and energy efficient buildings. Principal components analysis (PCA) was applied to Rs data recorded at 30 stations on the Mediterranean coast of Spain. Due to equipment failures and site operation problems, time series of Rs often present data gaps or discontinuities. The PCA approach copes with this problem and allows estimation of present and past values by taking advantage of Rs records from nearby stations. The gap infilling performance of this methodology is compared with neural networks and alternative conventional approaches. Four principal components explain 66% of the data variability with respect to the average trajectory (97% if non-centered values are considered). A new method based on principal components regression was also developed for Rs estimation if previous measurements are not available. By means of multiple linear regression, it was found that the latent variables associated with the four relevant principal components can be fitted according to the latitude, longitude and altitude of the station where the data were recorded. Additional geographical or climatic variables did not increase the predictive goodness-of-fit. The resulting models allow the estimation of daily Rs values at any location in the area under study and present higher accuracy than artificial neural networks and some conventional approaches
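
    A toy sketch of the two-step procedure outlined above - PCA across stations, followed by a linear fit of each station's loading on latitude, longitude and altitude - is given below. The radiation matrix and station coordinates are random placeholders, so the printed numbers are meaningless; only the workflow is illustrated.

    ```python
    # Workflow sketch: PCA over a (days x stations) Rs matrix, then geographic
    # regression of the per-station loadings; data are random placeholders.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_days, n_stations = 365, 30
    Rs = rng.gamma(shape=8.0, scale=2.5, size=(n_days, n_stations))   # fake daily Rs
    coords = np.column_stack([rng.uniform(38, 41, n_stations),        # latitude
                              rng.uniform(-1, 1, n_stations),         # longitude
                              rng.uniform(0, 800, n_stations)])       # altitude (m)

    pca = PCA(n_components=4).fit(Rs)      # days are samples, stations are variables
    loadings = pca.components_.T           # shape (n_stations, 4)

    for k in range(4):
        reg = LinearRegression().fit(coords, loadings[:, k])
        print(f"PC{k+1}: variance ratio {pca.explained_variance_ratio_[k]:.2f}, "
              f"R^2 of geographic fit {reg.score(coords, loadings[:, k]):.2f}")
    ```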

  14. A summary of recent refinements to the WAKE dispersion model, a component of the HGSYSTEM/UF6 model suite

    International Nuclear Information System (INIS)

    Yambert, M.W.; Lombardi, D.A.; Goode, W.D. Jr.; Bloom, S.G.

    1998-08-01

    The original WAKE dispersion model, a component of the HGSYSTEM/UF6 model suite, is based on Shell Research Ltd.'s HGSYSTEM Version 3.0 and was developed by the US Department of Energy for use in estimating downwind dispersion of materials due to accidental releases from gaseous diffusion plant (GDP) process buildings. The model is applicable to scenarios involving both ground-level and elevated releases into building wake cavities of non-reactive plumes that are either neutrally or positively buoyant. Over the 2-year period since its creation, the WAKE model has been used to perform consequence analyses for Safety Analysis Reports (SARs) associated with gaseous diffusion plants in Portsmouth (PORTS), Paducah (PGDP), and Oak Ridge. These applications have identified the need for additional model capabilities (such as the treatment of complex terrain and time-variant releases) not present in the original utilities, which, in turn, has resulted in numerous modifications to these codes as well as the development of additional, stand-alone postprocessing utilities. Consequently, application of the model has become increasingly complex as the number of executable, input, and output files associated with a single model run has steadily grown. In response to these problems, a streamlined version of the WAKE model has been developed which integrates all calculations that are currently performed by the existing WAKE and the various post-processing utilities. This report summarizes the efforts involved in developing this revised version of the WAKE model

  15. Evaluation of nuclear power plant component failure probability and core damage probability using simplified PSA model

    International Nuclear Information System (INIS)

    Shimada, Yoshio

    2000-01-01

    It is anticipated that changes in the frequency of surveillance tests, preventive maintenance or parts replacement of safety related components may change the component failure probability and result in a change of the core damage probability. It is also anticipated that the change differs depending on the initiating event frequency or the component type. This study assessed the change of core damage probability using a simplified PSA model capable of calculating core damage probability in a short time period, which was developed by the US NRC to process accident sequence precursors, when various component failure probabilities are changed between 0 and 1, or when Japanese or American initiating event frequency data are used. As a result of the analysis: (1) It was clarified that the frequency of surveillance tests, preventive maintenance or parts replacement of motor driven pumps (high pressure injection pumps, residual heat removal pumps, auxiliary feedwater pumps) should be changed carefully, since the change in core damage probability is large when the base failure probability changes in the increasing direction. (2) Core damage probability is insensitive to changes in surveillance test frequency, since its change is small when the failure probability of motor operated valves and turbine driven auxiliary feedwater pumps changes by about one order of magnitude. (3) The change in core damage probability is small when Japanese failure probability data are applied to emergency diesel generators, even if the failure probability changes by one order of magnitude from the base value. On the other hand, when American failure probability data are applied, the increase in core damage probability is large, even if the failure probability changes in the increasing direction. Therefore, when Japanese failure probability data are applied, core damage probability is insensitive to changes in surveillance test frequency, etc. (author)

  16. Precision modelling of M dwarf stars: the magnetic components of CM Draconis

    Science.gov (United States)

    MacDonald, J.; Mullan, D. J.

    2012-04-01

    The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models with appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we first attempt to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundances (Y), heavy element abundances (Z), opacities and the mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can inclusion of magnetic effects into stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify. An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ ≈ B²/4

  17. Modelling and computer simulation for the manufacture by powder HIPing of Blanket Shield components for ITER

    International Nuclear Information System (INIS)

    Gillia, O.; Bucci, Ph.; Vidotto, F.; Leibold, J.-M.; Boireau, B.; Boudot, C.; Cottin, A.; Lorenzetto, P.; Jacquinot, F.

    2006-01-01

    In components of blanket modules for ITER, intricate cooling networks are needed in order to evacuate all heat coming from the plasma. Hot Isostatic Pressing (HIPing) technology is a very convenient method to produce near net shape components with a complex cooling network through massive stainless steel parts, by bonding together tubes inserted in grooves machined in bulk stainless steel. Powder is often included in the process so as to ease the difficulties arising with gap closure between tube and solid part or between several solid parts. At the same time, it relaxes the machining precision needed on the parts to be assembled before HIP. However, inserting powder in the assembly means densification, i.e. volume change of the powder during the HIP cycle. This leads to global and local shape changes of HIPed parts. In order to control the deformations, modelling and computer simulation are used. This modelling and computer simulation work has been done in support of the fabrication of a shield prototype for the ITER blanket. Problems such as global bending of the whole part and deformations of tubes in their powder bed are addressed. It is important that the part does not bend too much. It is important as well to have a circular tube shape after HIP, firstly in order to avoid rupture of the tubes during HIP, but also because non-destructive ultrasonic examination is needed to check the quality of the densification and of the bonding between tube and powder or solid parts; the insertion of a probe in the tubes requires a minimally circular tube shape. For simulation purposes, the behaviour of the different materials has to be modelled. Although the modelling of the massive stainless steel behaviour is not neglected, the most critical modelling concerns the powder. For this study, a thorough investigation of the powder behaviour has been performed with in-situ HIP dilatometry experiments and interrupted HIP cycles on trial parts. These experiments have allowed the identification of a

  18. Application of Tank Model for Predicting Water Balance and Flow Discharge Components of Cisadane Upper Catchment

    Directory of Open Access Journals (Sweden)

    Nana Mulyana Arifjaya

    2012-01-01

    Full Text Available The concept of the hydrological tank model was well described by four compartments (tanks). The first tank (tank A) comprised one vertical (qA0) and two lateral (qA1 and qA2) water flow components, and tank B comprised one vertical (qB0) and one lateral (qB1) water flow component. Tank C comprised one vertical (qC0) and one lateral (qC1) water flow component, whereas tank D comprised one lateral water flow component (qD1). These vertical water flows would also contribute to the depletion of water in the related tanks but would replenish the tanks in the deeper layers. It was assumed that all lateral water flow components would finally accumulate in one stream, so that the sum of the lateral water flows should, more or less, be equal to the water discharge (Qo) at the specified time of concern. Tank A received precipitation (R) and evapotranspiration (ET), whose gradient (R-ET) over time would become the driving force for the changes of water stored in the soil profiles and of the water flows leaving the soil layer. Thus the tank model could describe the vertical and horizontal water flow within the watershed. The research site was the Cisadane Upper Catchment, located at Pasir Buncir Village of Caringin Sub-District within the Regency of Bogor in West Java Province. The elevations ranged from 512 to 2,235 m above sea level, with a total drainage area of 1,811.5 ha and a total length of the main stream of 14,340.7 m. The land cover was dominated by forest with a total of 1,044.6 ha (57.67%), upland agriculture with a total of 477.96 ha (26.38%), mixed garden with a total of 92.85 ha (5.13%) and semi-technical irrigated rice field with a total of 196.09 ha (10.8%). The soil was classified as hydraquent (96.6%) and distropept (3.4%). Based on the calibration of the tank model application in the study area, the resulting coefficient of determination (R2) was 0.72 with a model efficiency (NSE) of 0.75; thus the tank model could well illustrate the water flow distribution of
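
    A schematic, single-outlet-per-tank version of the four-tank structure described above can be written in a few lines. The runoff and percolation coefficients, the synthetic rainfall and the constant evapotranspiration are invented for illustration and do not reproduce the calibrated Cisadane model.

    ```python
    # Simplified cascade of four tanks: each tank has one lateral outlet to the
    # stream and (except the deepest) one vertical outlet to the tank below.
    # All coefficients and forcing series are invented.
    import numpy as np

    def tank_model(rain, et, k_lat=(0.20, 0.10, 0.05, 0.01), k_vert=(0.10, 0.05, 0.02)):
        """Return simulated discharge (same units as rain) for daily rain and ET series."""
        storage = np.zeros(4)                     # tanks A, B, C, D
        discharge = np.zeros(len(rain))
        for t, (r, e) in enumerate(zip(rain, et)):
            storage[0] = max(storage[0] + r - e, 0.0)   # net forcing enters the top tank
            q_total = 0.0
            for i in range(4):
                q_lat = k_lat[i] * storage[i]                     # lateral flow to the stream
                perc = k_vert[i] * storage[i] if i < 3 else 0.0   # vertical flow downward
                storage[i] -= q_lat + perc
                if i < 3:
                    storage[i + 1] += perc
                q_total += q_lat
            discharge[t] = q_total
        return discharge

    rng = np.random.default_rng(7)
    rain = rng.gamma(0.8, 10.0, 120)              # mm/day, synthetic
    et = np.full(120, 3.0)                        # mm/day, constant for the sketch
    print("mean simulated discharge (mm/day):", tank_model(rain, et).mean())
    ```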

  19. End-to-end network models encompassing terrestrial, wireless, and satellite components

    Science.gov (United States)

    Boyarko, Chandler L.; Britton, John S.; Flores, Phil E.; Lambert, Charles B.; Pendzick, John M.; Ryan, Christopher M.; Shankman, Gordon L.; Williams, Ramon P.

    2004-08-01

    Development of network models that reflect true end-to-end architectures, such as the Transformational Communications Architecture, needs to encompass terrestrial, wireless and satellite components to truly represent all of the complexities in a worldwide communications network. Use of best-in-class tools, including OPNET, Satellite Tool Kit (STK), Popkin System Architect and their well-known XML-friendly definitions, such as OPNET Modeler's Data Type Description (DTD), or socket-based data transfer modules, such as STK/Connect, enables the sharing of data between applications for more rapid development of end-to-end system architectures and a more complete system design. By sharing the results of and integrating best-in-class tools we are able to (1) promote sharing of data, (2) enhance the fidelity of our results and (3) allow network and application performance to be viewed in the context of the entire enterprise and its processes.

  20. The construction of life prediction models for the design of Stirling engine heater components

    Science.gov (United States)

    Petrovich, A.; Bright, A.; Cronin, M.; Arnold, S.

    1983-01-01

    The service life of Stirling-engine heater structures of Fe-based high-temperature alloys is predicted using a numerical model based on a linear-damage approach and published test data (engine test data for a Co-based alloy and tensile-test results for both the Co-based and the Fe-based alloys). The operating principle of the automotive Stirling engine is reviewed; the economic and technical factors affecting the choice of heater material are surveyed; the test results are summarized in tables and graphs; the engine environment and automotive duty cycle are characterized; and the modeling procedure is explained. It is found that the statistical scatter of the fatigue properties of the heater components needs to be reduced (by decreasing the porosity of the cast material or employing wrought material in fatigue-prone locations) before the accuracy of life predictions can be improved.

  1. Two-component model application for error calculus in the environmental monitoring data analysis

    International Nuclear Information System (INIS)

    Carvalho, Maria Angelica G.; Hiromoto, Goro

    2002-01-01

    Analysis and interpretation of the results of an environmental monitoring program is often based on the evaluation of the mean value of a particular set of data, which is strongly affected by the analytical errors associated with each measurement. A model proposed by Rocke and Lorenzato assumes two error components, one additive and one multiplicative, to deal with lower and higher concentration values in a single model. In this communication, an application of this method for the re-evaluation of the errors reported in a large set of results of total alpha measurements in an environmental sample is presented. The results show that the mean values calculated taking into account the new errors are higher than those obtained with the original errors, indicating that the analytical errors reported before were underestimated in the region of lower concentrations. (author)
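
    Written in generic notation (the symbols below are not taken from the paper), the two-component error model combines an additive error, dominant near the detection limit, with a multiplicative error, dominant at high concentrations; the variance expression shown is the usual small-dispersion approximation.

    ```latex
    x = \mu\, e^{\eta} + \varepsilon,
    \qquad \eta \sim N(0,\sigma_\eta^{2}),
    \quad \varepsilon \sim N(0,\sigma_\varepsilon^{2}),
    \qquad
    \operatorname{Var}(x) \;\approx\; \sigma_\varepsilon^{2} + \mu^{2}\sigma_\eta^{2}.
    ```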

  2. Combustion engine diagnosis: model-based condition monitoring of gasoline and diesel engines and their components

    CERN Document Server

    Isermann, Rolf

    2017-01-01

    This book first offers a short introduction to advanced supervision, fault detection and diagnosis methods. It then describes model-based methods of fault detection and diagnosis for the main components of gasoline and diesel engines, such as the intake system, fuel supply, fuel injection, combustion process, turbocharger, exhaust system and exhaust gas aftertreatment. Additionally, model-based fault diagnosis of electrical motors, electric, pneumatic and hydraulic actuators and fault-tolerant systems is treated. In general, series-production sensors are used. The book includes abundant experimental results showing the detection and diagnosis quality of implemented faults. Written for automotive engineers in practice, it is also of interest to graduate students of mechanical and electrical engineering and computer science. The Content: Introduction.- I SUPERVISION, FAULT DETECTION AND DIAGNOSIS METHODS.- Supervision, Fault-Detection and Fault-Diagnosis Methods - a short Introduction.- II DIAGNOSIS OF INTERNAL COMBUST...

  3. Theoretical modeling and experimental study on fatigue initiation life of 16MnR notched components

    International Nuclear Information System (INIS)

    Wang Xiaogui; Gao Zengliang; Qiu Baoxiang; Jiang Yanrao

    2010-01-01

    In order to investigate the effects of notch geometry and loading conditions on the fatigue initiation life and fatigue fracture life of 16MnR material, fatigue experiments were conducted for both smooth rod specimens and notched rod specimens. The detailed elastic-plastic stress and strain responses were computed with the finite element software ABAQUS, incorporating a robust cyclic plasticity model via a user subroutine (UMAT). The obtained stresses and strains were applied to a multiaxial fatigue damage criterion to compute the fatigue damage induced by a loading cycle on the critical material plane. The fatigue initiation life was then obtained from the proposed theoretical model. The good agreement between the predicted results and the experimental data indicated that the fatigue initiation of notched components in the multiaxial stress state is related to all the nonzero stress and strain quantities. (authors)

  4. Multi-Trait analysis of growth traits: fitting reduced rank models using principal components for Simmental beef cattle

    Directory of Open Access Journals (Sweden)

    Rodrigo Reis Mota

    2016-09-01

    Full Text Available ABSTRACT: The aim of this research was to evaluate the dimensional reduction of additive direct genetic covariance matrices in genetic evaluations of growth traits (range 100-730 days) in Simmental cattle using principal components, as well as to estimate (co)variance components and genetic parameters. Principal component analyses were conducted for five different models: one full and four reduced-rank models. Models were compared using Akaike information (AIC) and Bayesian information (BIC) criteria. Variance components and genetic parameters were estimated by restricted maximum likelihood (REML). The AIC and BIC values were similar among models. This indicated that parsimonious models could be used in genetic evaluations in Simmental cattle. The first principal component explained more than 96% of total variance in both models. Heritability estimates were higher for advanced ages and varied from 0.05 (100 days) to 0.30 (730 days). Genetic correlation estimates were similar in both models regardless of magnitude and number of principal components. The first principal component was sufficient to explain almost all genetic variance. Furthermore, genetic parameter similarities and lower computational requirements allowed for parsimonious models in genetic evaluations of growth traits in Simmental cattle.
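
    The rank-reduction step can be illustrated directly: eigendecompose an additive genetic covariance matrix across ages and keep only the leading principal component. The 5 x 5 matrix below is a made-up example, not the estimated Simmental (co)variance matrix.

    ```python
    # Toy reduced-rank approximation of an additive genetic covariance matrix.
    import numpy as np

    ages = [100, 205, 365, 550, 730]                     # days, spanning the trait range
    G = np.array([[ 9, 10, 11, 10,  9],
                  [10, 14, 15, 14, 13],
                  [11, 15, 20, 19, 18],
                  [10, 14, 19, 24, 23],
                  [ 9, 13, 18, 23, 28]], dtype=float)    # hypothetical (co)variances

    eigval, eigvec = np.linalg.eigh(G)                   # ascending eigenvalues
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]

    print("share of variance explained by PC1: {:.1%}".format(eigval[0] / eigval.sum()))

    G_rank1 = eigval[0] * np.outer(eigvec[:, 0], eigvec[:, 0])   # rank-one approximation
    print("max abs error of rank-1 approximation:", np.abs(G - G_rank1).max())
    ```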

  5. Discrete kink dynamics in hydrogen-bonded chains: The two-component model

    DEFF Research Database (Denmark)

    Karpan, V.M.; Zolotaryuk, Yaroslav; Christiansen, Peter Leth

    2004-01-01

    We study discrete topological solitary waves (kinks and antikinks) in two nonlinear diatomic chain models that describe the collective dynamics of proton transfers in one-dimensional hydrogen-bonded networks. The essential ingredients of the models are (i) a realistic (anharmonic) ion-proton interaction and (ii) an ion ... chain subject to a substrate with two optical bands, both providing a bistability of the hydrogen-bonded proton. Exact two-component (kink and antikink) discrete solutions for these models are found numerically. We compare the soliton solutions and their properties in the one-component (when the heavy ions ...) and two-component models and find principal differences, like a significant difference in the stability switching behavior for the kinks and the antikinks. Water-filled carbon nanotubes are briefly discussed as possible realistic systems where topological discrete (anti)kink states might exist.

  6. [Quantitative models between canopy hyperspectrum and its component features at apple tree prosperous fruit stage].

    Science.gov (United States)

    Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang

    2010-10-01

    Hyperspectral technique has become the basis of quantitative remote sensing. The hyperspectrum of an apple tree canopy at the prosperous fruit stage consists of the complex information of fruits, leaves, stocks, soil and reflecting films, and is mostly affected by the component features of the canopy at this stage. First, the hyperspectra of 18 sample apple trees with reflecting films were compared with those of 44 trees without reflecting films. The impact of the reflecting films on reflectance was obvious, so the sample trees with ground reflecting films should be analyzed separately from those without ground films. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of converted data, was analyzed. The results showed that the correlation between reflectance and the ratio of fruit to leaf was the best, with a maximum coefficient of 0.815, and the correlation between reflectance and the ratio of leaf was a little better than that between reflectance and the density of fruit. Then models of correlation analysis, linear regression, BP neural network and support vector regression were used to explain the quantitative relationship between the hyperspectral reflectance and the ratio of fruit to leaf, with the software packages DPS and LIBSVM. All four models in the 611-680 nm characteristic band were feasible for prediction, while the accuracy of the BP neural network and support vector regression models was better than that of the one-variable linear regression and multi-variable regression models, and the accuracy of the support vector regression model was the best. This study will serve as a reliable theoretical reference for the yield estimation of apples based on remote sensing data.

  7. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    Science.gov (United States)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed, that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. Thus, the method is
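
    A hedged sketch of the meta-model step is shown below: a Gaussian-process regressor is trained on pre-sampled draping results and then queried for new geometry parameters. The two geometry parameters, the "maximum shear angle" response and all numerical values are invented stand-ins for Finite-Element draping data; the paper's own sampling scheme and kernel choices are not reproduced.

    ```python
    # Gaussian-process surrogate over a small design table of draping results.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(3)
    # design table: (corner radius [mm], drawing depth [mm]) of pre-sampled simulations
    X_train = rng.uniform([5.0, 10.0], [50.0, 80.0], size=(25, 2))
    # invented "maximum shear angle" response standing in for FE draping output
    y_train = 20 + 0.5 * X_train[:, 1] - 0.2 * X_train[:, 0] + rng.normal(0, 0.5, 25)

    kernel = ConstantKernel(1.0) * RBF(length_scale=[10.0, 10.0])
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

    X_new = np.array([[12.0, 60.0], [40.0, 25.0]])        # unseen geometry variants
    mean, std = gpr.predict(X_new, return_std=True)
    for x, m, s in zip(X_new, mean, std):
        print(f"radius {x[0]:.0f} mm, depth {x[1]:.0f} mm -> shear angle {m:.1f} +/- {s:.1f} deg")
    ```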

  8. Modelling the mid-Pliocene Warm Period climate with the IPSL coupled model and its atmospheric component LMDZ5A

    Directory of Open Access Journals (Sweden)

    C. Contoux

    2012-06-01

    Full Text Available This paper describes the experimental design and model results of the climate simulations of the mid-Pliocene Warm Period (mPWP, ca. 3.3–3 Ma) using the Institut Pierre Simon Laplace model (IPSLCM5A), in the framework of the Pliocene Model Intercomparison Project (PlioMIP). We use the IPSL atmosphere ocean general circulation model (AOGCM), and its atmospheric component alone (AGCM), to simulate the climate of the mPWP. Boundary conditions such as sea surface temperatures (SSTs), topography, ice-sheet extent and vegetation are derived from the ones imposed by the Pliocene Model Intercomparison Project (PlioMIP), described in Haywood et al. (2010, 2011). We first describe the IPSL model main features, and then give a full description of the boundary conditions used for atmospheric model and coupled model experiments. The climatic outputs of the mPWP simulations are detailed and compared to the corresponding control simulations. The simulated warming relative to the control simulation is 1.94 °C in the atmospheric and 2.07 °C in the coupled model experiments. In both experiments, warming is larger at high latitudes. Mechanisms governing the simulated precipitation patterns are different in the coupled model than in the atmospheric model alone, because of the reduced gradients in imposed SSTs, which impacts the Hadley and Walker circulations. In addition, a sensitivity test to the change of land-sea mask in the atmospheric model, representing a sea-level change from present-day to 25 m higher during the mid-Pliocene, is described. We find that surface temperature differences can be large (several degrees Celsius) but are restricted to the areas that were changed from ocean to land or vice versa. In terms of precipitation, impact on polar regions is minor although the change in land-sea mask is significant in these areas.

  9. Real time damage detection using recursive principal components and time varying auto-regressive modeling

    Science.gov (United States)

    Krishnan, M.; Bhowmik, B.; Hazra, B.; Pakrashi, V.

    2018-02-01

    In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using Recursive Principal Component Analysis (RPCA) in conjunction with Time Varying Auto-Regressive (TVAR) modeling is proposed. In this method, the acceleration data are used to obtain recursive proper orthogonal components online using the rank-one perturbation method, followed by TVAR modeling of the first transformed response, to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/non-linear states that indicate damage. Most of the works available in the literature deal with algorithms that require windowing of the gathered data owing to their data-driven nature, which renders them ineffective for online implementation. Algorithms focused on mathematically consistent recursive techniques in a rigorous theoretical framework of structural damage detection are missing, which motivates the development of the present framework; the framework is amenable to online implementation and can be utilized along with a suite of experimental and numerical investigations. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for the sample covariance matrices and the new data point at each successive time instant, using the rank-one perturbation method. TVAR modeling on the principal component explaining maximum variance is utilized and the damage is identified by tracking the TVAR coefficients. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data, without requiring any baseline data. Numerical simulations performed on a 5-dof nonlinear system under white noise excitation and El Centro (also known as the 1940 Imperial Valley earthquake) excitation, for different damage scenarios, demonstrate the robustness of the proposed algorithm. The method is further validated on results obtained from case studies involving
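
    The flavour of the recursive scheme can be sketched as follows: each new acceleration sample updates a running covariance estimate through a rank-one term, and the response along the leading principal direction is tracked online; a time-varying AR model would then be fitted to that series. For simplicity the covariance is re-diagonalised at every step rather than updated with the rank-one perturbation eigen-update used in the paper, and the data stream, forgetting factor and simulated "damage" are all invented.

    ```python
    # Online tracking of the leading principal coordinate of a streaming
    # multi-channel signal; all signals and parameters are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_dof, n_steps, lam = 5, 2000, 0.98          # 5-DOF system, forgetting factor lam

    C = np.eye(n_dof) * 1e-6                     # running covariance estimate
    pc1_history = np.zeros(n_steps)

    for t in range(n_steps):
        x = rng.normal(size=n_dof)
        if t > n_steps // 2:                     # crude stand-in for damage: one DOF changes
            x[2] *= 3.0
        C = lam * C + (1.0 - lam) * np.outer(x, x)        # rank-one covariance update
        eigval, eigvec = np.linalg.eigh(C)                # ascending eigenvalues
        pc1_history[t] = x @ eigvec[:, -1]                # response along the leading PC

    # a TVAR model would now be fitted to pc1_history to track the change
    print("std of first principal coordinate, before vs after the change:",
          pc1_history[:n_steps // 2].std(), pc1_history[n_steps // 2:].std())
    ```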

  10. Implementation of wall film condensation model to two-fluid model in component thermal hydraulic analysis code CUPID - 15237

    International Nuclear Information System (INIS)

    Lee, J.H.; Park, G.C.; Cho, H.K.

    2015-01-01

    In the containment of a nuclear reactor, the wall condensation occurs when containment cooling system and structures remove the mass and energy release and this phenomenon is of great importance to ensure containment integrity. If the phenomenon occurs in the presence of non-condensable gases, their accumulation near the condensate film leads to significant reduction in heat transfer during the condensation. This study aims at simulating the wall film condensation in the presence of non-condensable gas using CUPID, a computational multi-fluid dynamics code, which is developed by the Korea Atomic Energy Research Institute (KAERI) for the analysis of transient two-phase flows in nuclear reactor components. In order to simulate the wall film condensation in containment, the code requires a proper wall condensation model and liquid film model applicable to the analysis of the large scale system. In the present study, the liquid film model and wall film condensation model were implemented in the two-fluid model of CUPID. For the condensation simulation, a wall function approach with heat and mass transfer analogy was applied in order to save computational time without considerable refinement for the boundary layer. This paper presents the implemented wall film condensation model and then, introduces the simulation result using CUPID with the model for a conceptual condensation problem in a large system. (authors)

  11. A three-component analytic model of long-term climate change

    Science.gov (United States)

    Pratt, V. R.

    2011-12-01

    On the premise that fast climate fluctuations up to and including the 11-year solar cycle play a negligible role in long-term climate forecasting, we remove these from the 160-year HADCRUT3 global land-sea temperature record and model the result as the sum of a log-raised-exponential (log(b+exp(t))) and two sine waves of respective periods 56 and 75 years coinciding in phase in 1925. The latter two can be understood equivalently as a 62-year-period "carrier" modulated with a 440-year period that peaked in 1925 and vanished in 1705. This model gives an excellent fit, explaining 98% of the variance (r^2) of long-term climate over the 160 years. We derive the first component as the composition of Arrhenius's 1896 logarithmic dependence of surface temperature on CO2 with Hofmann's 2009 raised-exponential dependence of CO2 on time, but interpret its fit to the data as the net anthropogenic contribution incorporating all greenhouse and aerosol emissions and relevant feedbacks, bearing in mind the rapid growth in both population and technology. The 56-year oscillation matches the largest component of the Atlantic Multidecadal Oscillation, while the 75-year one is near an oscillation often judged to be in the vicinity of 70 years. The expected 1705 cancellation is about two decades earlier than suggested by Gray et al's tree-ring proxy for the AMO during 1567-1990 [Gray GPL 31, L12205]. While there is no consensus on the origin of ocean oscillations, the oscillations in geomagnetic secular variation noted by Nagata and Rimitake in 1963 and Slaucitajs and Winch in 1965, of respective periods 77 years and 61 years, correspond strikingly with our 76-year oscillation and 62-year "carrier." This model has a number of benefits. Simplicity. It is easily explained to a lay audience in response to the frequently voiced concern that the temperature record is poorly correlated with the CO2 record alone. It shows that the transition from natural to anthropogenic influences on long
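
    One way to write the three-component fit described above, with the two oscillations taken as cosines peaking in 1925 so that they coincide in phase there; the coefficients a, b, A1 and A2 are left symbolic because the abstract does not give them, and the time offset t0 and scale tau inside the exponential are written explicitly here, whereas the abstract abbreviates the first term as log(b + exp(t)):

    ```latex
    T(t) \;\approx\; a\,\log\!\bigl(b + e^{(t - t_0)/\tau}\bigr)
    \;+\; A_1 \cos\!\Bigl(\tfrac{2\pi\,(t - 1925)}{56}\Bigr)
    \;+\; A_2 \cos\!\Bigl(\tfrac{2\pi\,(t - 1925)}{75}\Bigr)
    ```

    Here the first term composes Arrhenius's logarithmic temperature response to CO2 with Hofmann's raised-exponential CO2 growth, and the sum of the two cosines is equivalent to the roughly 62-year "carrier" whose amplitude is modulated on an approximately 440-year period, vanishing around 1705.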

  12. Principal component analysis acceleration of rovibrational coarse-grain models for internal energy excitation and dissociation

    Science.gov (United States)

    Bellemans, Aurélie; Parente, Alessandro; Magin, Thierry

    2018-04-01

    The present work introduces a novel approach for obtaining reduced chemistry representations of large kinetic mechanisms in strong non-equilibrium conditions. The need for accurate reduced-order models arises from compression of large ab initio quantum chemistry databases for their use in fluid codes. The method presented in this paper builds on existing physics-based strategies and proposes a new approach based on the combination of a simple coarse grain model with Principal Component Analysis (PCA). The internal energy levels of the chemical species are regrouped in distinct energy groups with a uniform lumping technique. Following the philosophy of machine learning, PCA is applied on the training data provided by the coarse grain model to find an optimally reduced representation of the full kinetic mechanism. Compared to recently published complex lumping strategies, no expert judgment is required before the application of PCA. In this work, we will demonstrate the benefits of the combined approach, stressing its simplicity, reliability, and accuracy. The technique is demonstrated by reducing the complex quantum N2(1Σg+)-N(4Su) database for studying molecular dissociation and excitation in strong non-equilibrium. Starting from detailed kinetics, an accurate reduced model is developed and used to study non-equilibrium properties of the N2(1Σg+)-N(4Su) system in shock relaxation simulations.

  13. Towards the generation of a parametric foot model using principal component analysis: A pilot study.

    Science.gov (United States)

    Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan

    2016-06-01

    There have been many recent developments in patient-specific models, given their potential to provide more information on human pathophysiology and the increase in computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of parametric foot models for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features in the foot, and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. Day-Ahead Crude Oil Price Forecasting Using a Novel Morphological Component Analysis Based Model

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2014-01-01

    Full Text Available As a typical nonlinear and dynamic system, the crude oil price movement is difficult to predict and its accurate forecasting remains the subject of intense research activity. Recent empirical evidence suggests that the multiscale data characteristics in the price movement are another important stylized fact. The incorporation of a mixture of data characteristics in the time scale domain during the modelling process can lead to significant performance improvement. This paper proposes a novel morphological component analysis based hybrid methodology for modeling the multiscale heterogeneous characteristics of the price movement in the crude oil markets. Empirical studies in two representative benchmark crude oil markets reveal the existence of a multiscale heterogeneous microdata structure. The significant performance improvement of the proposed algorithm incorporating the heterogeneous data characteristics, against benchmark random walk, ARMA, and SVR models, is also attributed to the innovative methodology proposed to incorporate this important stylized fact during the modelling process. Meanwhile, work in this paper offers additional insights into the heterogeneous market microstructure with economically viable interpretations.

  15. Simulated spinal cerebrospinal fluid leak repair: an educational model with didactic and technical components.

    Science.gov (United States)

    Ghobrial, George M; Anderson, Paul A; Chitale, Rohan; Campbell, Peter G; Lobel, Darlene A; Harrop, James

    2013-10-01

    In the era of surgical resident work hour restrictions, the traditional apprenticeship model may provide fewer hours for neurosurgical residents to hone technical skills. Spinal dura mater closure or repair is one skill that is infrequently encountered, and persistent cerebrospinal fluid leaks are a potential morbidity. The objective was to establish an educational curriculum to train residents in spinal dura mater closure with a novel durotomy repair model. The Congress of Neurological Surgeons has developed a simulation-based model for durotomy closure with the ongoing efforts of their simulation educational committee. The core curriculum consists of didactic training materials and a technical simulation model of dural repair for the lumbar spine. Didactic pretest scores ranged from 4/11 (36%) to 10/11 (91%). Posttest scores ranged from 8/11 (73%) to 11/11 (100%). Overall, didactic improvements were demonstrated by all participants, with a mean improvement between pre- and posttest scores of 1.17 (18.5%; P = .02). The technical component consisted of 11 durotomy closures by 6 participants, where 4 participants performed multiple durotomies. Mean time to closure of the durotomy ranged from 490 to 546 seconds in the first and second closures, respectively (P = .66), whereby the median leak rate improved from 14 to 7 (P = .34). There were also demonstrative technical improvements by all. Simulated spinal dura mater repair appears to be a potentially valuable tool in the education of neurosurgery residents. The combination of a didactic and technical assessment appears to be synergistic in terms of educational development.

  16. 3D Organotypic Culture Model to Study Components of ERK Signaling.

    Science.gov (United States)

    Chioni, Athina-Myrto; Bajwa, Rabia Tayba; Grose, Richard

    2017-01-01

    Organotypic models are 3D in vitro representations of an in vivo environment. Their complexity can range from an epidermal replica to the establishment of a cancer microenvironment. These models have been used for many years, in an attempt to mimic the structure and function of cells and tissues found inside the body. Methods for developing 3D organotypic models differ according to the tissue of interest and the experimental design. For example, cultures may be grown submerged in culture medium and/or at an air-liquid interface. Our group is focusing on an air-liquid interface 3D organotypic model. These cultures are grown on a nylon membrane-covered metal grid with the cells embedded in a Collagen-Matrigel gel. This allows cells to grow at an air-liquid interface and to receive diffusion and nourishment from the medium below. Subsequently, the organotypic cultures can be used for immunohistochemical staining of various components of ERK signaling, which is a key player in mediating communication between cells and their microenvironment.

  17. A multi-component and multi-failure mode inspection model based on the delay time concept

    International Nuclear Information System (INIS)

    Wang Wenbin; Banjevic, Dragan; Pecht, Michael

    2010-01-01

    The delay time concept and the techniques developed for modelling and optimising plant inspection practices have been reported in many papers and case studies. For a system comprised of many components and subject to many different failure modes, one of the most convenient ways to model the inspection and failure processes is to use a stochastic point process for defect arrivals and a common delay time distribution for the duration between defect arrival and failure, for all defects. This is an approximation, but has been proven to be valid when the number of components is large. However, for a system with just a few key components and subject to few major failure modes, the approximation may be poor. In this paper, a model is developed to address this situation, where each component and failure mode is modelled individually and then pooled together to form the system inspection model. Since inspections are usually scheduled for the whole system rather than individual components, we then formulate the inspection model when the time to the next inspection from the point of a component failure renewal is random. This adds some complication to the model, and an asymptotic solution was found. Simulation algorithms have also been proposed as a comparison to the analytical results. A numerical example is presented to demonstrate the model.
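
    As an illustration of the delay time concept described above, the sketch below simulates a single component whose defects arrive as a Poisson process and fail after a random delay unless an inspection finds them first; the arrival rate, delay-time distribution, and perfect-inspection assumption are illustrative simplifications, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_component(T, horizon, defect_rate, delay_scale, n_runs=2000):
    """Monte Carlo sketch of the delay time concept for one component.

    Defects arrive as a Poisson process (rate `defect_rate`). Each defect
    becomes a failure after an exponential delay time unless an inspection
    (every T time units, assumed perfect) finds and removes it first.
    Returns the mean number of failures and of defects found per horizon.
    """
    failures, found = 0, 0
    for _ in range(n_runs):
        t = 0.0
        while True:
            t += rng.exponential(1.0 / defect_rate)    # next defect arrival
            if t > horizon:
                break
            h = rng.exponential(delay_scale)            # delay time to failure
            next_insp = np.ceil(t / T) * T              # first inspection after arrival
            if t + h < next_insp:
                failures += 1                           # fails before being inspected
            else:
                found += 1                              # caught at an inspection
    return failures / n_runs, found / n_runs

for T in (5.0, 10.0, 20.0):
    f, d = simulate_component(T, horizon=100.0, defect_rate=0.2, delay_scale=8.0)
    print(f"inspection interval {T:5.1f}: {f:.2f} failures, {d:.2f} defects found")
```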

  18. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  19. A general mixed boundary model reduction method for component mode synthesis

    International Nuclear Information System (INIS)

    Voormeeren, S N; Van der Valk, P L C; Rixen, D J

    2010-01-01

    A classic issue in component mode synthesis (CMS) methods is the choice for fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the components reduction base. In this paper, a novel mixed boundary CMS method called the 'Mixed Craig-Bampton' method is proposed. The method is derived by dividing the substructure DoF into a set of internal DoF, free interface DoF and fixed interface DoF. To this end a simple but effective scheme is introduced that, for every pair of interface DoF, selects a free or fixed boundary condition for each DoF individually. Based on this selection a reduction basis is computed consisting of vibration modes, static constraint modes and static residual flexibility modes. In order to assemble the reduced substructures a novel mixed assembly procedure is developed. It is shown that this approach leads to relatively sparse reduced matrices, whereas other mixed boundary methods often lead to full matrices. As such, the Mixed Craig-Bampton method forms a natural generalization of the classic Craig-Bampton and more recent Dual Craig-Bampton methods. Finally, the method is applied to a finite element test model. Analysis reveals that the proposed method has comparable or better accuracy and superior versatility with respect to the existing methods.
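
    For readers unfamiliar with the baseline method the paper generalizes, the following is a rough sketch of the classic fixed-interface Craig-Bampton reduction (not the proposed Mixed Craig-Bampton variant); the matrix names, toy structure, and mode count are placeholders.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary, n_modes):
    """Classic fixed-interface Craig-Bampton reduction (illustrative only).

    M, K      : full substructure mass and stiffness matrices
    boundary  : indices of the interface (boundary) DoF
    n_modes   : number of fixed-interface vibration modes to keep
    Returns the reduction basis T and the reduced matrices (Mr, Kr).
    """
    n = M.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]

    # Static constraint modes: internal response to unit boundary displacements.
    Psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface vibration modes of the internal DoF (lowest n_modes kept).
    w2, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :n_modes]

    # Reduction basis: physical interface DoF plus modal DoF.
    T = np.zeros((n, len(b) + n_modes))
    T[np.ix_(i, np.arange(len(b)))] = Psi
    T[np.ix_(b, np.arange(len(b)))] = np.eye(len(b))
    T[np.ix_(i, len(b) + np.arange(n_modes))] = Phi

    return T, T.T @ M @ T, T.T @ K @ T

# Tiny example: a chain of 6 unit masses and unit springs, interface at the last DoF.
n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
T, Mr, Kr = craig_bampton(M, K, boundary=[n - 1], n_modes=2)
print("reduced model size:", Kr.shape)
```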

  20. Modelling and design of undercarriage components of large-scale earthmoving equipment in tar sand operations

    Energy Technology Data Exchange (ETDEWEB)

    Szymanski, J.; Frimpong, S.; Sobieski, R. [Alberta Univ., Edmonton, AB (Canada). Centre for Advanced Energy and Minerals Research

    2004-07-01

    This presentation described the fundamental and applied research work which has been carried out at the University of Alberta's Centre for Advanced Energy and Minerals Research to improve the undercarriage elements of large scale earthmoving equipment used in oil sands mining operations. A new method has been developed to predict the optimum curvature and blade geometry of earth moving equipment such as bulldozers and motor graders. A mathematical relationship has been found to approximate the optimum blade shape for reducing cutting resistance and fill resistance. The equation is a function of blade geometry and soil properties. It is the first model that can mathematically optimize the shape of a blade on earth moving equipment. A significant saving in undercarriage components can be achieved from reducing the amount of cutting and filling resistance for this type of equipment working on different soils. A Sprocket Carrier Roller for a Tracked Vehicle was also invented to replace the conventional cylindrical carrier roller. The new sprocket type carrier roller offers greater support for the drive track and other components of the undercarriage assembly. A unique retaining pin assembly has also been designed to detach connecting disposable wear parts from earthmoving equipment. The retaining pin assembly is easy to assemble and disassemble and includes reusable parts. 13 figs.

  1. Investigating Effective Components of Higher Education Marketing and Providing a Marketing Model for Iranian Private Higher Education Institutions

    Science.gov (United States)

    Kasmaee, Roya Babaee; Nadi, Mohammad Ali; Shahtalebi, Badri

    2016-01-01

    Purpose: The purpose of this paper is to study and identify the effective components of higher education marketing and providing a marketing model for Iranian higher education private sector institutions. Design/methodology/approach: This study is a qualitative research. For identifying the effective components of higher education marketing and…

  2. Mind the gaps: a state-space model for analysing the dynamics of North Sea herring spawning components

    DEFF Research Database (Denmark)

    Payne, Mark

    2010-01-01

    the other components, whereas the Downs component has been the slowest. These differences give rise to changes in stock composition, which are shown to vary widely within a relatively short time. The modelling framework provides a valuable tool for studying and monitoring the dynamics of the individual...

  3. Local Prediction Models on Mid-Atlantic Ridge MORB by Principal Component Regression

    Science.gov (United States)

    Ling, X.; Snow, J. E.; Chin, W.

    2017-12-01

    The isotopic compositions of the daughter isotopes of long-lived radioactive systems (Sr, Nd, Hf and Pb) can be used to map the scale and history of mantle heterogeneities beneath mid-ocean ridges. Our goal is to relate the multidimensional structure in the existing isotopic dataset with an underlying physical reality of mantle sources. The numerical technique of Principal Component Analysis is useful to reduce the linear dependence of the data to a minimum set of orthogonal eigenvectors encapsulating the information contained (cf Agranier et al 2005). The dataset used for this study covers almost all the MORBs along the mid-Atlantic Ridge (MAR), from 54°S to 77°N and 8.8°W to -46.7°W, replicating the dataset published by Agranier et al. (2005) plus 53 basalt samples dredged and analyzed since then (data from PetDB). The principal components PC1 and PC2 account for 61.56% and 29.21%, respectively, of the total isotope ratio variability. Samples with compositions similar to HIMU, EM and DM are identified to better understand the PCs. PC1 and PC2 are accountable for HIMU and EM, whereas PC2 has limited control over the DM source. PC3 is more strongly controlled by the depleted mantle source than PC2. This means that all three principal components have a high degree of significance relevant to the established mantle sources. We also tested the relationship between mantle heterogeneity and sample locality. The K-means clustering algorithm is a type of unsupervised learning used to find groups in the data based on feature similarity. The PC factor scores of each sample are clustered into three groups. Clusters one and three alternate along the northern and southern MAR. Cluster two appears from 45.18°N to 0.79°N and -27.9°W to -30.40°W, alternating with cluster one. The ridge has been preliminarily divided into 16 sections considering both the clusters and ridge segments. The principal component regression models the section based on 6 isotope ratios and PCs. The
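
    A minimal sketch of the workflow described above (standardize, reduce to principal components, cluster the scores, and regress on the scores) is given below using scikit-learn; the synthetic data, the response variable, the number of components, and the number of clusters are stand-ins, not the study's dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical table: rows = samples, columns = 6 isotope ratios.
X = rng.normal(size=(200, 6))
# Synthetic response correlated with the first ratio, purely for illustration.
response = 30 * X[:, 0] + rng.normal(scale=5, size=200)

# Standardize, then reduce to principal components.
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)
print("explained variance ratios:", pca.explained_variance_ratio_)

# K-means clustering of the PC scores (three clusters, as in the abstract).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# Principal component regression: regress the quantity of interest on the scores.
pcr = LinearRegression().fit(scores, response)
print("PCR R^2 on the synthetic data:", pcr.score(scores, response))
```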

  4. Diffusion layer modeling for condensation with multi-component noncondensable gases

    International Nuclear Information System (INIS)

    Peterson, P.F.

    1999-01-01

    Many condensation problems involving noncondensable gases have multiple noncondensable species, for example air (with nitrogen, oxygen, and other gases); and other problems where light gases like hydrogen may mix with heavier gases like nitrogen. Particularly when the binary mass diffusion coefficients of the noncondensable species are substantially different, the noncondensable species tend to segregate in the condensation boundary layer. This paper presents a fundamental analysis of the mass transport with multiple noncondensable species, identifying a simple method to calculate an effective mass diffusion coefficient that can be used with the simple diffusion layer model, and discusses in detail the effects of using mass and mole based quantities, and various simplifying approximations, on predicted condensation rates. The results are illustrated with quantitative examples to demonstrate the potential importance of multi-component noncondensable gas effects
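
    One widely used approximation for the effective diffusivity of a vapor through a mixture of stagnant noncondensable gases is a Wilke-type mixture rule; the sketch below shows that rule for illustration only and is not necessarily the method derived in this paper. The species names and diffusivity values are hypothetical.

```python
def effective_diffusivity(y_vapor, y_gas, D_vapor_gas):
    """Wilke-type effective diffusivity of a vapor through a mixture of
    stagnant noncondensable gases (a common approximation, shown here for
    illustration only).

    y_vapor     : mole fraction of the condensing vapor
    y_gas       : dict {gas name: mole fraction} of noncondensable species
    D_vapor_gas : dict {gas name: binary diffusivity of the vapor in that gas}
    """
    denom = sum(y / D_vapor_gas[name] for name, y in y_gas.items())
    return (1.0 - y_vapor) / denom

# Example: steam diffusing through air treated as N2 + O2 (hypothetical values, m^2/s).
D_eff = effective_diffusivity(
    y_vapor=0.30,
    y_gas={"N2": 0.55, "O2": 0.15},
    D_vapor_gas={"N2": 2.6e-5, "O2": 2.8e-5},
)
print(f"effective diffusivity: {D_eff:.2e} m^2/s")
```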

  5. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop which is widely adopted for food and non-food products such as cookies, vegetable oil, cosmetics, household products and others. Palm oil is grown mainly in Malaysia and Indonesia. However, the demand for palm oil has been growing rapidly over the years. This phenomenon causes illegal logging of trees and destroys the natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using Maximum Likelihood Estimation via the Newton-Raphson algorithm to fit a two-component mixture model. Besides, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
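
    A minimal sketch of fitting a two-component normal mixture by maximum likelihood is shown below; for brevity it uses the EM algorithm rather than the Newton-Raphson iteration applied in the paper, and the data are synthetic stand-ins for the palm oil and exchange rate series.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the series studied in the paper.
x = np.concatenate([rng.normal(0.0, 0.5, 700), rng.normal(1.5, 1.2, 300)])

# EM algorithm for a two-component normal mixture (the paper maximizes the
# same likelihood with Newton-Raphson instead).
w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: responsibility of each component for each observation.
    dens = np.vstack([w[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: update weights, means and standard deviations.
    nk = resp.sum(axis=1)
    w = nk / len(x)
    mu = (resp @ x) / nk
    sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)

print("weights:", w, "means:", mu, "std devs:", sigma)
```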

  6. Atomistic modeling of the structural components of the blood-brain barrier

    Science.gov (United States)

    Glukhova, O. E.; Grishina, O. A.; Slepchenkov, M. M.

    2015-03-01

    The blood-brain barrier (BBB), which is a barrier system between the brain and blood vessels, plays a key role in "isolating" the brain from unnecessary information and in reducing the "noise" in interneuron communication. It is known that the barrier function of the BBB strictly depends on the initial state of the organism and changes significantly with age and, especially, with the development of "vascular accidents". Disclosing the mechanisms of regulation of the barrier function will enable new ways to deliver neurotrophic drugs to the brain of the newborn. The aim of this work is the construction of atomistic models of the structural components of the blood-brain barrier to reveal the mechanisms of regulation of the barrier function.

  7. Fault Detection of Reciprocating Compressors using a Model from Principles Component Analysis of Vibrations

    International Nuclear Information System (INIS)

    Ahmed, M; Gu, F; Ball, A D

    2012-01-01

    Traditional vibration monitoring techniques have found it difficult to determine a set of effective diagnostic features due to the high complexity of the vibration signals originating from the many different impact sources and wide ranges of practical operating conditions. In this paper Principal Component Analysis (PCA) is used for selecting vibration features and detecting different faults in a reciprocating compressor. Vibration datasets were collected from the compressor under the baseline condition and five common faults: valve leakage, inter-cooler leakage, suction valve leakage, loose drive belt combined with intercooler leakage, and loose drive belt combined with suction valve leakage. A model using five PCs has been developed using the baseline data sets, and the presence of faults can be detected by comparing the T2 and Q values from the features of fault vibration signals with corresponding thresholds developed from the baseline data. However, the Q-statistic procedure produces better detection as it can separate the five faults completely.
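
    The following sketch illustrates the general PCA monitoring recipe referred to above: build a PCA model on baseline features and flag new records whose Hotelling's T2 or Q (squared prediction error) statistics exceed thresholds. Here the thresholds are simple baseline percentiles rather than the analytical control limits, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical feature matrices: rows = records, columns = vibration features.
baseline = rng.normal(size=(300, 20))
test = rng.normal(loc=0.5, size=(50, 20))      # e.g. features from a faulty condition

# Build the PCA model on baseline data (5 principal components, as in the abstract).
mean, std = baseline.mean(axis=0), baseline.std(axis=0)
Z = (baseline - mean) / std
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 5
P = Vt[:k].T                                   # loadings
lam = (s[:k] ** 2) / (len(Z) - 1)              # variances of the retained PCs

def t2_q(X):
    Zx = (X - mean) / std
    scores = Zx @ P
    T2 = np.sum(scores**2 / lam, axis=1)       # Hotelling's T2 in the model subspace
    resid = Zx - scores @ P.T
    Q = np.sum(resid**2, axis=1)               # Q statistic (squared prediction error)
    return T2, Q

# Simple empirical thresholds from the baseline (99th percentile).
T2_base, Q_base = t2_q(baseline)
T2_lim, Q_lim = np.percentile(T2_base, 99), np.percentile(Q_base, 99)

T2_test, Q_test = t2_q(test)
print("fraction flagged by T2:", np.mean(T2_test > T2_lim))
print("fraction flagged by Q :", np.mean(Q_test > Q_lim))
```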

  8. On the distribution of the stochastic component in SUE traffic assignment models

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker

    1997-01-01

    The paper discusses the use of different distributions of the stochastic component in SUE. A main conclusion is that they generally gave reasonably similar results, except for the LogNormal distribution, whose use is discouraged. However, in cases with low link costs (e.g. in dense urban areas, ramps and modelling of intersections and interchanges), distributions with long tails (Gumbel and Normal) gave biased results compared with the Rectangular distribution. The Triangular distribution gave results somewhere in between. Besides giving the most reasonable results, the Rectangular distribution is the most computationally efficient. All distributions gave a unique solution at link level after a sufficiently large number of iterations (up to 1,000 at full-scale networks), while the usual aggregated measures of convergence converged quite fast (under 50 iterations). The tests also showed that the distributions must

  9. A Four-Component Model of Sexual Orientation & Its Application to Psychotherapy.

    Science.gov (United States)

    Bowins, Brad

    Distress related to sexual orientation is a common focus in psychotherapy. In some instances the distress is external in nature, as with persecution, and in others it is internal, as with self-acceptance issues. Complicating matters, sexual orientation is a very complex topic producing a great deal of confusion for both clients and therapists. The current paper provides a four-component model (sexual orientation dimensions, activation of these dimensions, the role of erotic fantasy, and social construction of sexual orientation) that in combination provides a comprehensive perspective. Activation of dimensions is a novel contribution not proposed in any other model. With improved understanding of sexual orientation issues, and utilization of this knowledge to guide interventions, psychotherapists can improve outcomes with their clients. Also described is how dimensions of sexual orientation relate to transgender. In addition to improving psychotherapy outcomes, the four-component model presented can help reduce discrimination and persecution by demonstrating that the capacity for both homoerotic and heteroerotic behavior is universal.

  10. Model of modern strategic management of an enterprise: contents and components

    Directory of Open Access Journals (Sweden)

    I.T. Raykovska

    2015-09-01

    Full Text Available The article investigates different interpretations and definitions of the concept of strategic management. It also aims to identify ways of revealing the components and peculiarities of the concept. On the basis of a critical analysis of the economic literature, the author singles out process, target, and complex approaches to the interpretation of the essence of strategic management and indicates that strategic management is a complex concept that encompasses management of strategic opportunities and operative management of problems in real time so as to respond quickly to unpredictable changes. According to the modern understanding of strategic management, the author singles out its main peculiarities, which presuppose ensuring a quick response of an enterprise to changes in the external environment with the help of already developed strategic methods and models, and strategic thinking of the employees of an economic entity aimed at achieving its development strategy. The parameters for comparing operating and strategic management are systematized. It is established that operating management is centered on the search for ways to make better use of enterprise resources, while strategic management looks to the needs and changes of the external environment, tracking and adapting to its changes and searching for new possibilities in a competitive environment. A conceptual model of strategic management of an enterprise is formed. It is stated that the use of the model enables one to determine the place of strategic analysis in the discussed system and to ensure the fulfillment of strategic plans.

  11. Thermodynamically consistent modeling and simulation of multi-component two-phase flow with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2017-12-09

    A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, which has recently emerged as an attractive alternative to the NPT-based framework for modeling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy of the NPT-based framework. Different from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then we derive a transport equation of the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of multiple fluids. A relation between the pressure gradient and chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies the total (free) energy dissipation with time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling relations between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and deal with the coupling relations between molar densities and velocity through careful physical observations with mathematical rigor. We prove that the proposed numerical scheme can preserve the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
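
    The convex-concave splitting idea can be illustrated on a much simpler gradient flow than the Peng-Robinson-based model of the paper. The sketch below applies an Eyre-type splitting to a 1D Allen-Cahn equation with a double-well free energy, treating the convex part implicitly and the concave part explicitly, and checks that the discrete free energy decays; all parameters are illustrative.

```python
import numpy as np

# Eyre-type convex-concave splitting for a 1D Allen-Cahn gradient flow,
#   dc/dt = eps^2 c_xx - (c^3 - c),
# with the convex part (diffusion and the c^3 term) implicit and the
# concave part of the free energy (giving the +c term) explicit.
N, eps, dt, steps = 128, 0.2, 0.1, 200
h = 2 * np.pi / N
x = np.arange(N) * h

# Periodic second-difference Laplacian as a dense matrix (small N, for clarity).
I = np.eye(N)
lap = (np.roll(I, 1, axis=0) - 2 * I + np.roll(I, -1, axis=0)) / h**2

def free_energy(c):
    grad = (np.roll(c, -1) - c) / h
    return h * np.sum(0.25 * (c**2 - 1.0) ** 2 + 0.5 * eps**2 * grad**2)

c = 0.1 * np.cos(3 * x)               # initial condition
energies = [free_energy(c)]

for _ in range(steps):
    rhs = c + dt * c                  # explicit (concave) contribution
    # Newton iterations for the implicit (convex) part of the update.
    for _ in range(30):
        F = c - dt * (eps**2 * (lap @ c) - c**3) - rhs
        J = I - dt * (eps**2 * lap - 3.0 * np.diag(c**2))
        dc = np.linalg.solve(J, -F)
        c = c + dc
        if np.max(np.abs(dc)) < 1e-12:
            break
    energies.append(free_energy(c))

# The splitting is designed so the discrete free energy never increases.
print("max energy increase over all steps:", np.diff(energies).max())
```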

  12. Evaluation of a Mathematical Model for Single Component Adsorption Equilibria with Reference to the Prediction of Multicomponent Adsorption Equilibria

    DEFF Research Database (Denmark)

    Krøll, Annette Elisabeth; Marcussen, Lis

    1997-01-01

    An equilibrium equation for pure component adsorption is compared to experiments and to the vacancy solution theory. The investigated equilibrium equation is a special case of a model for prediction of multicomponent adsorption equilibria. The vacancy solution theory for multicomponent systems requires binary experimental data for determining the interaction parameters of the Wilson equation; thus a large number of experiments are needed. The multicomponent equilibria model which is investigated for single component systems in this work is based on pure component data only. This means that the requirement for experimental data is reduced significantly. The two adsorption models are compared, using experimental pure gas adsorption data found in literature. The results obtained by the models are in close agreement for pure component equilibria and they give a good description of the experimental data...

  13. Physics-Based Stress Corrosion Cracking Component Reliability Model cast in an R7-Compatible Cumulative Damage Framework

    Energy Technology Data Exchange (ETDEWEB)

    Unwin, Stephen D.; Lowry, Peter P.; Layton, Robert F.; Toloczko, Mychailo B.; Johnson, Kenneth I.; Sanborn, Scott E.

    2011-07-01

    This is a working report drafted under the Risk-Informed Safety Margin Characterization pathway of the Light Water Reactor Sustainability Program, describing statistical models of passive component reliabilities.

  14. Advances in model-based software for simulating ultrasonic immersion inspections of metal components

    Science.gov (United States)

    Chiou, Chien-Ping; Margetan, Frank J.; Taylor, Jared L.; Engle, Brady J.; Roberts, Ronald A.

    2018-04-01

    Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was initiated in 2015 to repackage existing research-grade software into user-friendly tools for the rapid estimation of signal-to-noise ratio (SNR) for ultrasonic inspections of metals. The software combines: (1) a Python-based graphical user interface for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signals and backscattered grain noise characteristics. The latter makes use of the Thompson-Gray measurement model for the response from an internal defect, and the Thompson-Margetan independent scatterer model for backscattered grain noise. This paper, the third in the series [1-2], provides an overview of the ongoing modeling effort with emphasis on recent developments. These include the ability to: (1) treat microstructures where grain size, shape and tilt relative to the incident sound direction can all vary with depth; and (2) simulate C-scans of defect signals in the presence of backscattered grain noise. The simulation software can now treat both normal and oblique-incidence immersion inspections of curved metal components. Both longitudinal and shear-wave inspections are treated. The model transducer can either be planar, spherically-focused, or bi-cylindrically-focused. A calibration (or reference) signal is required and is used to deduce the measurement system efficiency function. This can be "invented" by the software using center frequency and bandwidth information specified by the user, or, alternatively, a measured calibration signal can be used. Defect types include flat-bottomed-hole reference reflectors, and spherical pores and inclusions. Simulation outputs include estimated defect signal amplitudes, root-mean-square values of grain noise amplitudes, and SNR as functions of the depth of the defect within the metal component. At any particular depth, the user can view

  15. Continuous Video Modeling to Prompt Completion of Multi-Component Tasks by Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Purrazzella, Kaitlin; Purrazzella, Kimberly

    2014-01-01

    This investigation examined the ability of four adults with moderate intellectual disability to complete multi-component tasks using continuous video modeling. Continuous video modeling, which is a newly researched application of video modeling, presents video in a "looping" format which automatically repeats playing of the video while…

  16. Model-Based Sensor Placement for Component Condition Monitoring and Fault Diagnosis in Fossil Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mobed, Parham [Texas Tech Univ., Lubbock, TX (United States); Pednekar, Pratik [West Virginia Univ., Morgantown, WV (United States); Bhattacharyya, Debangsu [West Virginia Univ., Morgantown, WV (United States); Turton, Richard [West Virginia Univ., Morgantown, WV (United States); Rengaswamy, Raghunathan [Texas Tech Univ., Lubbock, TX (United States)

    2016-01-29

    Design and operation of energy producing, near “zero-emission” coal plants has become a national imperative. This report on model-based sensor placement describes a transformative two-tier approach to identify the optimum placement, number, and type of sensors for condition monitoring and fault diagnosis in fossil energy system operations. The algorithms are tested on a high fidelity model of the integrated gasification combined cycle (IGCC) plant. For a condition monitoring network, whether equipment should be considered at a unit level or a systems level depends upon the criticality of the process equipment, its likelihood of failure, and the level of resolution desired for any specific failure. Because of the presence of a high fidelity model at the unit level, a sensor network can be designed to monitor the spatial profile of the states and estimate fault severity levels. In an IGCC plant, besides the gasifier, the sour water gas shift (WGS) reactor plays an important role. In view of this, condition monitoring of the sour WGS reactor is considered at the unit level, while a detailed plant-wide model of the gasification island, including the sour WGS reactor and the Selexol process, is considered for fault diagnosis at the system level. Finally, the developed algorithms unify the two levels and identify an optimal sensor network that maximizes the effectiveness of the overall system-level fault diagnosis and component-level condition monitoring. This work could have a major impact on the design and operation of future fossil energy plants, particularly at the grassroots level where the sensor network is yet to be identified. In addition, the same algorithms developed in this report can be further enhanced to be used in retrofits, where the objectives could be upgrades (addition of more sensors) and relocation of existing sensors.

  17. Component characterization and predictive modeling for green roof substrates optimized to adsorb P and improve runoff quality: A review.

    Science.gov (United States)

    Jennett, Tyson S; Zheng, Youbin

    2018-06-01

    This review is a synthesis of the current knowledge regarding the effects of green roof substrate components and their retentive capacity for nutrients, particularly phosphorus (P). Substrates may behave as either sources or sinks of P depending on the components they are formulated from, and to date, the total P-adsorbing capacity of a substrate has not been quantified as the sum of the contributions of its components. Few direct links have been established among substrate components and their physicochemical characteristics that would affect P-retention. A survey of recent literature presented herein highlights the trends within individual component selection (clays and clay-like material, organics, conventional soil and sands, lightweight inorganics, and industrial wastes and synthetics) for those most common during substrate formulation internationally. Component selection will vary with respect to ease of sourcing component materials, cost of components, nutrient-retention capacity, and environmental sustainability. However, the number of distinct components considered for inclusion in green roof substrates continues to expand, as the desires of growers, material suppliers, researchers and industry stakeholders are incorporated into decision-making. Furthermore, current attempts to characterize the most often used substrate components are also presented whereby runoff quality is correlated to entire substrate performance. With the use of well-described characterization (constant capacitance model) and modeling techniques (the soil assemblage model), it is proposed that substrates optimized for P adsorption may be developed through careful selection of components with prior knowledge of their chemical properties, that may increase retention of P in plant-available forms, thereby reducing green roof fertilizer requirements and P losses in roof runoff. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Analysis and Modeling of the Galvanic Skin Response Spontaneous Component in the context of Intelligent Biofeedback Systems Development

    Science.gov (United States)

    Unakafov, A.

    2009-01-01

    The paper presents an approach to the analysis and modeling of the galvanic skin response (GSR) spontaneous component. A classification of biofeedback training methods is given, and the importance of developing intelligent methods is shown. The INTENS method, which is promising for intellectualization, is presented. An important problem in the intellectualization of biofeedback training methods, estimation of the GSR spontaneous component, is solved in the main part of the work. Its main characteristics are described, and results of modeling the GSR spontaneous component are shown. Results of a small study of the optimum material for GSR probes are also presented.

  19. Manufacturing Hydraulic Components for the Primary Double Entry S-Pump Model

    Directory of Open Access Journals (Sweden)

    S. Iu. Kuptsov

    2015-01-01

    Full Text Available The article describes a new design of the primary pump to run in powerful units (more than 1 GW) of power plants. The new construction has some advantages such as compactness, a theoretical lack of radial and axial forces, and high efficiency over a wide range of flow. These advantages are made possible by an innovative shape of the pump flow path. An impeller with guide vanes forms a three-row single stage; each row is an axial double-entry blade system. The inlet and outlet parts have the shape of an involute, which can ensure (according to calculated data) efficiency and stability over a wide range of flow because of the lack of spiral parts. The results of numerical calculations of the pump working flow theoretically confirm that the demanded parameters of the pump (H=286 m; Q=1.15 m3/s) can be obtained with competitive efficiency. To verify the proposed advantages of the construction, it was decided to conduct a physical experiment. For this purpose a small model of the real pump was designed with parameters H=14 m, Q=13 l/s. The construction of the pump model has a cartridge conception. In addition, some parts of the blade system can be replaced quickly during operational development of the pump. In order to obtain the hydraulic characteristics of the pump itself, excluding the electromotor, a torque gauge coupling is used. Numerical calculations for the pump model were also performed, which confirm its operability. For manufacturing the blade system a new, promising technology is applied. The main hydraulic components (impellers and guide vanes) are made of ABS plastic using a 3D printer. With this technology parts are made layer by layer from welded plastic filament. Using this method a satisfactory tolerance (approximately ±0.3 mm) of the parts was obtained. At the moment, it is possible to create parts with a maximum size no higher than 150 mm

  20. Therapeutic benefits of a component of coffee in a rat model of Alzheimer's disease.

    Science.gov (United States)

    Basurto-Islas, Gustavo; Blanchard, Julie; Tung, Yunn Chyn; Fernandez, Jose R; Voronkov, Michael; Stock, Maxwell; Zhang, Sherry; Stock, Jeffry B; Iqbal, Khalid

    2014-12-01

    A minor component of coffee unrelated to caffeine, eicosanoyl-5-hydroxytryptamide (EHT), provides protection in a rat model for Alzheimer's disease (AD). In this model, viral expression of the phosphoprotein phosphatase 2A (PP2A) endogenous inhibitor, the I2(PP2A), or SET protein in the brains of rats leads to several characteristic features of AD including cognitive impairment, tau hyperphosphorylation, and elevated levels of cytoplasmic amyloid-β protein. Dietary supplementation with EHT for 6-12 months resulted in substantial amelioration of all these defects. The beneficial effects of EHT could be associated with its ability to increase PP2A activity by inhibiting the demethylation of its catalytic subunit PP2Ac. These findings raise the possibility that EHT may make a substantial contribution to the apparent neuroprotective benefits associated with coffee consumption as evidenced by numerous epidemiologic studies indicating that coffee drinkers have substantially lowered risk of developing AD. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. A Fault Prognosis Strategy Based on Time-Delayed Digraph Model and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ningyun Lu

    2012-01-01

    Full Text Available Because of the interlinking of process equipment in the process industry, event information may propagate through the plant and affect a lot of downstream process variables. Specifying the causality and estimating the time delays among process variables are critically important for data-driven fault prognosis. They are not only helpful for finding the root cause when a plant-wide disturbance occurs, but also for revealing the evolution of an abnormal event propagating through the plant. This paper is concerned with the information flow directionality and time-delay estimation problems in the process industry and presents an information synchronization technique to assist fault prognosis. Time-delayed mutual information (TDMI) is used for both causality analysis and time-delay estimation. To represent the causality structure of high-dimensional process variables, a time-delayed signed digraph (TD-SDG) model is developed. Then, a general fault prognosis strategy is developed based on the TD-SDG model and principal component analysis (PCA). The proposed method is applied to an air separation unit and has achieved satisfactory results in predicting the frequently occurring “nitrogen-block” fault.
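
    A rough sketch of estimating time-delayed mutual information with a histogram estimator is shown below; the delay at which the TDMI peaks serves as the time-delay estimate. The signal model and parameters are synthetic and only illustrate the idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(x; y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def time_delayed_mi(x, y, max_lag=30):
    """I(x(t); y(t + lag)) for lag = 0..max_lag; the argmax estimates the delay."""
    return np.array([mutual_information(x[:-lag or None], y[lag:])
                     for lag in range(max_lag + 1)])

# Synthetic example: variable y follows variable x with a delay of 12 samples.
n, true_delay = 5000, 12
x = rng.normal(size=n)
x = np.convolve(x, np.ones(5) / 5, mode="same")     # mildly smoothed driver signal
y = np.roll(x, true_delay) + 0.2 * rng.normal(size=n)

tdmi = time_delayed_mi(x, y)
print("estimated delay:", int(np.argmax(tdmi)), "samples (true:", true_delay, ")")
```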

  2. Trajectory modeling of gestational weight: A functional principal component analysis approach.

    Directory of Open Access Journals (Sweden)

    Menglu Che

    Full Text Available Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate the individual weight growth trajectory using sparsely collected bodyweight data, and to identify the factors affecting the weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and improved adaptability compared with the classic nonlinear mixed-effect models, demonstrating a novel tool that can be used to facilitate real-time monitoring and interventions of GWG. Our regression analysis showed that prepregnancy BMI had a high predictive value for the weight changes during pregnancy, which agrees with the published weight gain guideline.

  3. Gelation in a model 1-component system with adhesive hard-sphere interactions

    Science.gov (United States)

    Kim, Jung Min; Eberle, Aaron; Fang, Jun; Wagner, Norman

    2012-02-01

    Colloidal dispersions can undergo a dynamical arrest of the disperse phase leading to a system with solid-like properties when either the volume fraction or the interparticle potential is varied. Systems that contain low to moderate particulate concentrations form gels whereas higher concentrations lead to glassy states in which caging by nearest neighbors can be a significant contributor to the arrested long-time dynamics. Colloid-polymer mixtures have been the prevalent model system for studying the effect of attraction, where attractions are entropically driven by depletion effects, and gelation has been shown to be a result of phase separation [1]. Using the model 1-component octadecyl-coated silica nanoparticle system, Eberle et al. [2] found the gel line to intersect the spinodal to the left of the critical point and, at higher concentrations, to extend toward the mode coupling theory attraction-driven glass line. We continue this study by varying the particle diameter and find quantitative differences which we explain by gravity. 1. Lu, P.J., et al., Nature, 2008. 453(7194): p. 499-504. 2. Eberle, A.P.R., N.J. Wagner, and R. Castaneda-Priego, Physical Review Letters, 2011. 106(10).

  4. Cyclic stress-strain behaviour under thermomechanical fatigue conditions - Modeling by means of an enhanced multi-component model

    Energy Technology Data Exchange (ETDEWEB)

    Christ, H J [Institut fuer Werkstofftechnik, Universitaet Siegen, D-57068 Siegen (Germany); Bauer, V, E-mail: hans-juergen.christ@uni-siegen.d [Wieland Werke AG, Graf-Arco Str. 36, D-89072 Ulm (Germany)

    2010-07-01

    The cyclic stress-strain behaviour of metals and alloys in cyclic saturation can reasonably be described by means of simple multi-component models, such as the model based on a parallel arrangement of elastic-perfectly plastic elements, which was originally proposed by Masing already in 1923. This model concept was applied to thermomechanical fatigue loading of two metallic engineering materials which were found to be rather oppositional with respect to cyclic plastic deformation. One material is an austenitic stainless steel of type AISI304L which shows dynamic strain aging (DSA) and serves as an example of a rather ductile alloy. A dislocation arrangement was found after TMF testing deviating characteristically from the corresponding isothermal microstructures. The second material is a third-generation near-gamma TiAl alloy which is characterized by a very pronounced ductile-to-brittle transition (DBT) within the temperature range of TMF cycling. Isothermal fatigue testing at temperatures below the DBT temperature leads to cyclic hardening, while cyclic softening was found to occur above DBT. The combined effect under TMF leads to a continuously developing mean stress. The experimental observations regarding isothermal and non-isothermal stress-strain behaviour and their correlation to the underlying microstructural processes were used to further develop the TMF multi-component model in order to accurately predict the TMF stress-strain response by taking the alloy-specific features into account.
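
    The Masing-type parallel arrangement mentioned above can be sketched in a few lines: each elastic-perfectly-plastic element carries part of the stress, and the sum over elements produces the hysteresis loop. The element stiffnesses, yield stresses, and strain cycle below are hypothetical, and the temperature dependence needed for TMF is not included.

```python
import numpy as np

class MasingModel:
    """Parallel arrangement of elastic-perfectly-plastic elements (Masing, 1923).

    Illustrative sketch only; the parameters are hypothetical and isothermal.
    """

    def __init__(self, stiffnesses, yield_stresses):
        self.E = np.asarray(stiffnesses, dtype=float)       # element stiffnesses
        self.sy = np.asarray(yield_stresses, dtype=float)   # element yield stresses
        self.eps_p = np.zeros_like(self.E)                  # element plastic strains

    def stress(self, strain):
        trial = self.E * (strain - self.eps_p)
        over = np.abs(trial) - self.sy
        yielding = over > 0.0
        # Return mapping: move the plastic strain so the element sits on its yield surface.
        self.eps_p[yielding] += np.sign(trial[yielding]) * over[yielding] / self.E[yielding]
        return float(np.sum(np.clip(trial, -self.sy, self.sy)))

model = MasingModel(stiffnesses=[70e3, 50e3, 30e3],          # MPa
                    yield_stresses=[50.0, 80.0, 120.0])      # MPa

# Prescribed mechanical strain cycle of +/-0.6 %.
strain_history = np.concatenate([np.linspace(0, 0.006, 50),
                                 np.linspace(0.006, -0.006, 100),
                                 np.linspace(-0.006, 0.006, 100)])
stress_history = [model.stress(e) for e in strain_history]
print("peak stress in the cycle: %.1f MPa" % max(stress_history))
```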

  5. Studies of Westward Electrojets and Field-Aligned Currents in the Magnetotail During Substorms: Implications for Magnetic Field Models

    Science.gov (United States)

    Spence, Harlan E.

    1996-01-01

    discrete features in the context of the global picture. We reported on our initial study at national and international meetings and published the results of our predictions of the low-altitude signatures of the plasma sheet. In addition, the PI was invited to contribute a publication to the so-called 'Great Debate in Space Physics' series that is a feature of EOS. The topic was the nature of magnetospheric substorms. Specific questions of when and where a substorm occurs and the connection between the auroral and magnetospheric components were discussed in that paper. This paper therefore was derived exclusively from the research supported by this grant. Attachments: 'Empirical modeling of the quiet time nightside magnetosphere', 'CRRES observations of particle flux dropout event', 'The what, where, when, and why of magnetospheric substorm triggers', and 'Low altitude signature of the plasma sheet: model prediction of local time dependence'.

  6. Model compounds for heavy crude oil components and tetrameric acids: Characterization and interfacial behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Nordgaard, Erland Loeken

    2009-07-01

    The tendency during the past decades in the quality of oil reserves shows that conventional crude oil is gradually being depleted and the demand being replaced by heavy crude oils. These oils contain more of a class of high-molecular-weight components termed asphaltenes. This class is mainly responsible for stable water-in-crude-oil emulsions. Both heavy and lighter crude oils in addition contain substantial amounts of naphthenic acids, creating naphthenate deposits in topside facilities. The asphaltene class is defined by solubility and consists of several thousand different structures which may behave differently in oil-water systems. The nature of possible sub-fractions of the asphaltenes has received more attention lately, but the properties and composition of such fractions are still not completely understood. In this work, the problem has been addressed by synthesizing model compounds for the asphaltenes, on the premise that an incorporated acidic function could be crucial. Such acidic, polyaromatic surfactants turned out to be highly interfacially active as studied by the pendant drop technique. Langmuir monolayer compressions combined with fluorescence of deposited films indicated that the interfacial activity was a result of an efficient packing of the aromatic cores in the molecules, giving stabilizing interactions at the o/w interface. Droplet size distributions of emulsions studied by PFG NMR and adsorption onto hydrophilic silica particles demonstrated the high affinity to o/w interfaces and that the efficient packing gave higher emulsion stability. Compared to a model compound lacking the acidic group, it was obvious that sub-fractions of asphaltenes that contain an acidic, or maybe similar hydrogen-bonding, function could be responsible for stable w/o emulsions. Indigenous tetrameric acids are the main constituent of calcium naphthenate deposits. Several synthetic model tetra-acids have been prepared and their properties have been compared to the indigenous

  7. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    Science.gov (United States)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations and the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics which further allow for a comparison of variables which are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
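
    A sketch of computing SPAEF from two spatial patterns is given below, following the formulation published by the metric's authors (Pearson correlation, ratio of coefficients of variation, and histogram overlap of z-scored values, combined as a Euclidean distance from the ideal point); the synthetic maps are placeholders.

```python
import numpy as np

def spaef(sim, obs, bins=100):
    """SPAtial EFficiency metric (sketch of the published formulation).

    sim, obs : arrays of simulated / observed values of a spatial pattern
               (e.g. flattened maps with no-data cells removed).
    """
    sim, obs = np.asarray(sim, float).ravel(), np.asarray(obs, float).ravel()

    # Component 1: Pearson correlation of the two patterns.
    alpha = np.corrcoef(sim, obs)[0, 1]

    # Component 2: ratio of the coefficients of variation (spatial variability).
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))

    # Component 3: overlap of the histograms of z-scored values
    # (insensitive to bias and to differences in unit).
    zs = (sim - np.mean(sim)) / np.std(sim)
    zo = (obs - np.mean(obs)) / np.std(obs)
    lo, hi = min(zs.min(), zo.min()), max(zs.max(), zo.max())
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    gamma = np.minimum(hs, ho).sum() / ho.sum()

    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

# Synthetic example: a 'simulated' map that is a noisy, rescaled copy of the 'observed' one.
rng = np.random.default_rng(5)
obs = rng.gamma(shape=2.0, scale=10.0, size=10000)
sim = 0.8 * obs + rng.normal(0, 3.0, size=obs.shape) + 5.0
print(f"SPAEF = {spaef(sim, obs):.3f}")
```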

  8. The dynamical role of the central molecular ring within the framework of a seven-component Galaxy model

    Science.gov (United States)

    Simin, A. A.; Fridman, A. M.; Haud, U. A.

    1991-09-01

    A Galaxy model in which the surface density of the gas component has a sharp (two orders of magnitude) jump in the region of the outer radius of the molecular ring is constructed on the basis of observational data. This model is used to calculate the contributions of each population to the model curve of Galactic rotation. The value of the dimensionless increment of hydrodynamical instability for the gas component, being much less than 1, coincides with a similar magnitude for the same gas in the gravity field of the entire Galaxy. It is concluded that the unstable gas component of the Galaxy lies near the limit of the hydrodynamical instability, which is in accordance with the Le Chatelier principle. The stellar populations of the Galaxy probably do not affect the generation of the spiral structure in the gaseous component.

  9. Investigating Health Belief model component about sexual and reproductive health in college female students

    Directory of Open Access Journals (Sweden)

    Akram Aslani

    2016-05-01

    Full Text Available Background and objective: One of the critical steps in providing social and family health by concentrating on women's health is expanding sexual and reproductive health and addressing it in various aspects at the national and international level. Therefore, the goal of this study is to analyze the components of the health belief model regarding the sexual and reproductive health of female students of the University of Medical Sciences of Shahroud. Methods: The present study is a cross-sectional analysis conducted with the participation of 397 female students of the University of Medical Sciences of Shahroud in 2014. The data collection tool was a questionnaire consisting of demographic information, knowledge, and structures of the health belief model. The data were analyzed with SPSS software using t-tests and chi-square tests. Results: The results showed that students had high self-efficacy (17.7 ± 2) in reproductive health care, but their reported perceived barriers (3.02 ± 1.37) were relatively high. There was also a direct relationship between the demographic variable of age and the students' knowledge. The average score of students' awareness of sexually transmitted diseases was 9.97 ± 2.62. There was no significant relationship between age, marital status, or study major and the structures of the health belief model regarding sexually transmitted diseases, AIDS, and their preventive behaviors. Conclusion: The findings of this study show that the students' self-efficacy regarding preventive behaviors for unwanted pregnancy, sexually transmitted diseases, and AIDS is high. On the other hand, the average of perceived barriers among students is relatively high. Considering the findings, it is recommended that sexual and reproductive health programs be applied in order to reduce barriers and further increase the ability of young people. Paper Type: Research Article.

  10. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    Science.gov (United States)

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digit (distractors). Graphemes were either congruently or incongruently colored with the synesthetes' reported grapheme-color association. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.

  11. Characterisation of a peripheral neuropathic component of the rat monoiodoacetate model of osteoarthritis.

    Directory of Open Access Journals (Sweden)

    Matthew Thakur

    Full Text Available Joint degeneration observed in the rat monoiodoacetate (MIA) model of osteoarthritis shares many histological features with the clinical condition. The accompanying pain phenotype has seen the model widely used to investigate the pathophysiology of osteoarthritis pain, and for preclinical screening of analgesic compounds. We have investigated the pathophysiological sequelae of MIA used at low (1 mg) or high (2 mg) dose. Intra-articular 2 mg MIA induced expression of ATF-3, a sensitive marker for peripheral neuron stress/injury, in small and large diameter DRG cell profiles principally at levels L4 and 5 (levels predominated by neurones innervating the hindpaw rather than L3). At the 7 day timepoint, ATF-3 signal was significantly smaller in 1 mg MIA treated animals than in the 2 mg treated group. 2 mg, but not 1 mg, intra-articular MIA was also associated with a significant reduction in intra-epidermal nerve fibre density in plantar hindpaw skin, and produced spinal cord dorsal and ventral horn microgliosis. The 2 mg treatment evoked mechanical pain-related hypersensitivity of the hindpaw that was significantly greater than the 1 mg treatment. MIA treatment produced weight bearing asymmetry and cold hypersensitivity which was similar at both doses. Additionally, while pregabalin significantly reduced deep dorsal horn evoked neuronal responses in animals treated with 2 mg MIA, this effect was much reduced or absent in the 1 mg or sham treated groups. These data demonstrate that intra-articular 2 mg MIA not only produces joint degeneration, but also evokes significant axonal injury to DRG cells including those innervating targets outside of the knee joint such as hindpaw skin. This significant neuropathic component needs to be taken into account when interpreting studies using this model, particularly at doses greater than 1 mg MIA.

  12. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach

    Science.gov (United States)

    Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik

    2015-01-01

    Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10–150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with the synesthetes’ reported grapheme-color association. A mathematical model, based on Bundesen’s (1990) Theory of Visual Attention (TVA), was fitted to each observer’s data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group’s model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes’ expertise regarding their specific grapheme-color associations. PMID:26252019

  13. Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.

    Directory of Open Access Journals (Sweden)

    Árni Gunnar Ásgeirsson

    Full Text Available Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with the synesthetes' reported grapheme-color association. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.

  14. A comparison of two-component and quadratic models to assess survival of irradiated stage-7 oocytes of Drosophila melanogaster

    International Nuclear Information System (INIS)

    Peres, C.A.; Koo, J.O.

    1981-01-01

    In this paper, the quadratic model for analysing data of this kind, i.e. S/S0 = exp(-αD - βD²), where S and S0 are defined as before, is proposed. It is shown that the same biological interpretation can be given to the parameters α and A and to the parameters β and B. Furthermore, it is shown that the quadratic model involves one probabilistic stage more than the two-component model, and therefore the quadratic model would perhaps be more appropriate as a dose-response model for survival of irradiated stage-7 oocytes of Drosophila melanogaster. In order to apply these results, the data presented by Sankaranarayanan and by Sankaranarayanan and Volkers are reanalysed using the quadratic model. It is shown that the quadratic model fits better than the two-component model to the data in most situations. (orig./AJ)
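
    As an illustration of the kind of fit this abstract describes, the sketch below estimates the quadratic survival model S/S0 = exp(-αD - βD²) from a hypothetical dose-survival table with SciPy. The dose values and surviving fractions are invented for illustration only and are not the Drosophila data reanalysed in the record.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Quadratic dose-response model: S/S0 = exp(-alpha*D - beta*D**2)
    def quadratic_survival(dose, alpha, beta):
        return np.exp(-alpha * dose - beta * dose**2)

    # Hypothetical dose (Gy) and surviving-fraction data, for illustration only
    dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
    surviving_fraction = np.array([1.00, 0.78, 0.60, 0.33, 0.16, 0.07])

    params, _ = curve_fit(quadratic_survival, dose, surviving_fraction, p0=(0.1, 0.01))
    alpha, beta = params
    print(f"alpha = {alpha:.3f} per Gy, beta = {beta:.3f} per Gy^2")
    ```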

  15. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    Science.gov (United States)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noised data and the covariance matrix for the PCA analysis are consistent (referred to as the unbiased case), and (2) the noised data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values and that Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion
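
    The workflow sketched in this record (project a high-dimensional parameter field onto a few principal components, then sample the reduced coefficients with MCMC) can be illustrated as follows. This is a toy example with an invented linear forward operator and made-up noise level, not the authors' convolution model or code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Prior ensemble of parameter fields (500 realizations of 100 unknowns)
    ensemble = rng.normal(size=(500, 100)).cumsum(axis=1)      # smooth-ish toy fields
    mean = ensemble.mean(axis=0)
    _, _, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
    n_pc = 5                                                    # keep a few components
    basis = Vt[:n_pc]

    def expand(coeffs):
        """Map reduced PCA coefficients back to the full parameter field."""
        return mean + coeffs @ basis

    # Toy linear forward model and synthetic noisy observations
    G = rng.normal(size=(20, 100))
    truth = expand(rng.normal(size=n_pc))
    data = G @ truth + rng.normal(scale=0.1, size=20)

    def log_post(coeffs):
        resid = data - G @ expand(coeffs)
        return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * np.sum(coeffs**2)

    # Random-walk Metropolis over the reduced coefficients (cf. 'Method 2')
    current, samples = np.zeros(n_pc), []
    for _ in range(5000):
        proposal = current + 0.1 * rng.normal(size=n_pc)
        if np.log(rng.uniform()) < log_post(proposal) - log_post(current):
            current = proposal
        samples.append(current.copy())
    print("posterior mean coefficients:", np.mean(samples[1000:], axis=0).round(2))
    ```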

  16. Thermodynamically consistent modeling and simulation of multi-component two-phase flow model with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2016-11-25

    A general diffuse interface model with a realistic equation of state (e.g. the Peng-Robinson equation of state) is proposed to describe multi-component two-phase fluid flow based on the principles of the NVT-based framework, a recent alternative to the NPT-based framework for modelling realistic fluids. The proposed model uses the Helmholtz free energy rather than the Gibbs free energy used in the NPT-based framework. Departing from the classical routines, we combine the first law of thermodynamics and related thermodynamical relations to derive the entropy balance equation, and then derive a transport equation for the Helmholtz free energy density. Furthermore, by using the second law of thermodynamics, we derive a set of unified equations for both interfaces and bulk phases that can describe the partial miscibility of two fluids. A relation between the pressure gradient and the chemical potential gradients is established, and this relation leads to a new formulation of the momentum balance equation, which demonstrates that chemical potential gradients become the primary driving force of fluid motion. Moreover, we prove that the proposed model satisfies total (free) energy dissipation with time. For numerical simulation of the proposed model, the key difficulties result from the strong nonlinearity of the Helmholtz free energy density and the tight coupling between molar densities and velocity. To resolve these problems, we propose a novel convex-concave splitting of the Helmholtz free energy density and handle the coupling between molar densities and velocity through careful physical observations combined with mathematical rigor. We prove that the proposed numerical scheme preserves the discrete (free) energy dissipation. Numerical tests are carried out to verify the effectiveness of the proposed method.
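
    The "realistic equation of state" mentioned above can be evaluated directly; the snippet below computes the single-component Peng-Robinson pressure from its standard constants. It is only a sketch of the equation of state itself (with illustrative methane constants), not the diffuse-interface model or the convex-concave splitting scheme of the record.

    ```python
    import numpy as np

    R = 8.314462618  # J/(mol K), universal gas constant

    def peng_robinson_pressure(T, Vm, Tc, Pc, omega):
        """Peng-Robinson pressure (Pa) at temperature T (K) and molar volume Vm
        (m^3/mol) for a component with critical constants Tc, Pc and acentric
        factor omega."""
        a = 0.45724 * R**2 * Tc**2 / Pc
        b = 0.07780 * R * Tc / Pc
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
        return R * T / (Vm - b) - a * alpha / (Vm**2 + 2.0 * b * Vm - b**2)

    # Methane near its critical point (illustrative values)
    p = peng_robinson_pressure(T=190.0, Vm=1.0e-4, Tc=190.56, Pc=4.599e6, omega=0.011)
    print(f"P = {p/1e6:.2f} MPa")
    ```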

  17. Condition Prediction Model and Component Interaction Fault Tree for Heat Distribution Systems

    National Research Council Canada - National Science Library

    Marsh, Charles

    2001-01-01

    .... Frequent, detailed inspection is largely impractical, and components are subject to complex, obscure interdependencies that can create seemingly unrelated distresses virtually anywhere in the system...

  18. Subretinal Pigment Epithelial Deposition of Drusen Components Including Hydroxyapatite in a Primary Cell Culture Model.

    Science.gov (United States)

    Pilgrim, Matthew G; Lengyel, Imre; Lanzirotti, Antonio; Newville, Matt; Fearn, Sarah; Emri, Eszter; Knowles, Jonathan C; Messinger, Jeffrey D; Read, Russell W; Guidry, Clyde; Curcio, Christine A

    2017-02-01

    Extracellular deposits containing hydroxyapatite, lipids, proteins, and trace metals that form between the basal lamina of the RPE and the inner collagenous layer of Bruch's membrane are hallmarks of early AMD. We examined whether cultured RPE cells could produce extracellular deposits containing all of these molecular components. Retinal pigment epithelium cells isolated from freshly enucleated porcine eyes were cultured on Transwell membranes for up to 6 months. Deposit composition and structure were characterized using light, fluorescence, and electron microscopy; synchrotron x-ray diffraction and x-ray fluorescence; secondary ion mass spectroscopy; and immunohistochemistry. Apparently functional primary RPE cells, when cultured on 10-μm-thick inserts with 0.4-μm-diameter pores, can produce sub-RPE deposits that contain hydroxyapatite, lipids, proteins, and trace elements, without outer segment supplementation, by 12 weeks. The data suggest that sub-RPE deposit formation is initiated, and probably regulated, by the RPE, as well as the loss of permeability of the Bruch's membrane and choriocapillaris complex associated with age and early AMD. This cell culture model of early AMD lesions provides a novel system for testing new therapeutic interventions against sub-RPE deposit formation, an event occurring well in advance of the onset of vision loss.

  19. Damage Detection of Refractory Based on Principal Component Analysis and Gaussian Mixture Model

    Directory of Open Access Journals (Sweden)

    Changming Liu

    2018-01-01

    Full Text Available Acoustic emission (AE) technique is a common approach to identify damage in refractories; however, the analysis is complex because as many as fifteen parameters are involved, which calls for effective data processing and classification algorithms to reduce the level of complexity. In this paper, experiments involving three-point bending tests of refractories were conducted and AE signals were collected. A new data processing method of merging similar parameters in the description of the damage and reducing the dimension was developed. By means of principal component analysis (PCA) for dimension reduction, the fifteen related parameters can be reduced to two parameters. These two parameters are linear combinations of the fifteen original parameters and were taken as the indices for damage classification. Based on the proposed approach, the Gaussian mixture model was integrated with the Bayesian information criterion to group the AE signals into two damage categories, which accounted for 99% of all damage. Electron microscope scanning of the refractories verified the two types of damage.
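
    A minimal sketch of the PCA-plus-Gaussian-mixture pipeline described above, written with scikit-learn on synthetic acoustic-emission features; the feature matrix, the two simulated damage modes, and all parameter values are assumptions for illustration, not the experimental data of the record.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    # Hypothetical matrix of AE hits x 15 waveform parameters (two synthetic modes)
    rng = np.random.default_rng(1)
    ae_features = np.vstack([rng.normal(0, 1, (200, 15)),
                             rng.normal(3, 1, (150, 15))])

    # Reduce the fifteen correlated parameters to two principal components
    scores = PCA(n_components=2).fit_transform(
        StandardScaler().fit_transform(ae_features))

    # Choose the number of Gaussian components by minimising the BIC
    models = {k: GaussianMixture(n_components=k, random_state=0).fit(scores)
              for k in range(1, 6)}
    best_k = min(models, key=lambda k: models[k].bic(scores))
    labels = models[best_k].predict(scores)
    print(f"BIC selects {best_k} damage classes; counts per class:",
          np.bincount(labels))
    ```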

  20. Nuclear fuel cycle system simulation tool based on high-fidelity component modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ames, David E.,

    2014-02-01

    The DOE is currently directing extensive research into developing fuel cycle technologies that will enable the safe, secure, economic, and sustainable expansion of nuclear energy. The task is formidable considering the numerous fuel cycle options, the large dynamic system that each represents, and the necessity to accurately predict their behavior. The path to successfully develop and implement an advanced fuel cycle is highly dependent on the modeling capabilities and simulation tools available for performing useful, relevant analysis to assist stakeholders in decision making. Therefore, a high-fidelity fuel cycle simulation tool that performs system analysis, including uncertainty quantification and optimization, was developed. The resulting simulator also includes the capability to calculate environmental impact measures for individual components and for the system. An integrated system method and analysis approach that provides consistent and comprehensive evaluations of advanced fuel cycles was developed. A general approach was utilized, allowing the system to be modified in order to provide analysis for other systems with similar attributes. By utilizing this approach, the framework for simulating many different fuel cycle options is provided. Two example fuel cycle configurations were developed to take advantage of used fuel recycling and transmutation capabilities in waste management scenarios leading to minimized waste inventories.

  1. EBaLM-THP - A neural network thermohydraulic prediction model of advanced nuclear system components

    International Nuclear Information System (INIS)

    Ridluan, Artit; Manic, Milos; Tokuhiro, Akira

    2009-01-01

    In view of the worldwide energy demand, economics, and consensus concern regarding climate change, nuclear power - specifically near-term nuclear power plant designs - is receiving increased engineering attention. However, as the nuclear industry is emerging from a lull in component modeling and analyses, optimization, for example using ANN, has received little research attention. This paper presents a neural network approach, EBaLM, based on a specific combination of two training algorithms, error back-propagation (EBP) and Levenberg-Marquardt (LM), applied to a problem of thermohydraulics predictions (THPs) of advanced nuclear heat exchangers (HXs). The suitability of the EBaLM-THP algorithm was tested on two different reference problems in thermohydraulic design analysis; that is, convective heat transfer of supercritical CO2 through a single tube, and convective heat transfer through a printed circuit heat exchanger (PCHE) using CO2. Further, a comparison of EBaLM-THP and a polynomial fitting approach was considered. Within the defined reference problems, the neural network approach generated good results in both cases, in spite of highly fluctuating trends in the dataset used. In fact, the neural network approach demonstrated a cumulative error measure one to three orders of magnitude smaller than that produced via 10th-order polynomial fitting

  2. Microclimatic models. Estimation of components of the energy balance over land surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Heikinheimo, M.; Venaelaeinen, A.; Tourula, T. [Finnish Meteorological Inst., Helsinki (Finland). Air Quality Dept.

    1996-12-31

    Climates at regional scale are strongly dependent on the interaction between atmosphere and its lower boundary, the oceans and the land surface mosaic. Land surfaces influence climate through their albedo, and the aerodynamic roughness, the processes of the biosphere and many soil hydrological properties; all these factors vary considerably geographically. Land surfaces receive a certain portion of the solar irradiance depending on the cloudiness, atmospheric transparency and surface albedo. Short-wave solar irradiance is the source of the heat energy exchange at the earth's surface and also regulates many biological processes, e.g. photosynthesis. Methods for estimating solar irradiance, atmospheric transparency and surface albedo were reviewed during the course of this project. The solar energy at earth's surface is consumed for heating the soil and the lower atmosphere. Where moisture is available, evaporation is one of the key components of the surface energy balance, because the conversion of liquid water into water vapour consumes heat. The evaporation process was studied by carrying out field experiments and testing parameterisation for a cultivated agricultural surface and for lakes. The micrometeorological study over lakes was carried out as part of the international 'Northern Hemisphere Climatic Processes Experiment' (NOPEX/BAHC) in Sweden. These studies have been aimed at a better understanding of the energy exchange processes of the earth's surface-atmosphere boundary for a more accurate and realistic parameterisation of the land surface in atmospheric models
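
    The partitioning of available energy into sensible and latent heat that the abstract refers to can be written down in a few lines. The sketch below uses the Bowen-ratio form of the surface energy balance, Rn = H + LE + G, with invented flux values; it is an illustration of the balance itself, not the parameterisations developed in the project.

    ```python
    # Surface energy balance: Rn = H + LE + G, with Bowen ratio B = H / LE
    def partition_energy(Rn, G, bowen_ratio):
        available = Rn - G                    # energy left for turbulent fluxes (W/m^2)
        LE = available / (1.0 + bowen_ratio)  # latent heat flux (evaporation)
        H = available - LE                    # sensible heat flux
        return H, LE

    # Illustrative midday values over a moist agricultural surface
    H, LE = partition_energy(Rn=450.0, G=50.0, bowen_ratio=0.5)
    print(f"H = {H:.0f} W/m^2, LE = {LE:.0f} W/m^2")
    ```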

  3. Microclimatic models. Estimation of components of the energy balance over land surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Heikinheimo, M; Venaelaeinen, A; Tourula, T [Finnish Meteorological Inst., Helsinki (Finland). Air Quality Dept.

    1997-12-31

    Climates at regional scale are strongly dependent on the interaction between atmosphere and its lower boundary, the oceans and the land surface mosaic. Land surfaces influence climate through their albedo, and the aerodynamic roughness, the processes of the biosphere and many soil hydrological properties; all these factors vary considerably geographically. Land surfaces receive a certain portion of the solar irradiance depending on the cloudiness, atmospheric transparency and surface albedo. Short-wave solar irradiance is the source of the heat energy exchange at the earth's surface and also regulates many biological processes, e.g. photosynthesis. Methods for estimating solar irradiance, atmospheric transparency and surface albedo were reviewed during the course of this project. The solar energy at earth's surface is consumed for heating the soil and the lower atmosphere. Where moisture is available, evaporation is one of the key components of the surface energy balance, because the conversion of liquid water into water vapour consumes heat. The evaporation process was studied by carrying out field experiments and testing parameterisation for a cultivated agricultural surface and for lakes. The micrometeorological study over lakes was carried out as part of the international 'Northern Hemisphere Climatic Processes Experiment' (NOPEX/BAHC) in Sweden. These studies have been aimed at a better understanding of the energy exchange processes of the earth's surface-atmosphere boundary for a more accurate and realistic parameterisation of the land surface in atmospheric models

  4. Mathematical modelling of ultrasonic testing of components with defects close to a non-planar surface

    International Nuclear Information System (INIS)

    Westlund, Jonathan; Bostroem, Anders

    2011-05-01

    Nondestructive testing with ultrasound is a standard procedure in the nuclear power industry. To develop and qualify the methods extensive experimental work with test blocks is usually required. This can be very time-consuming and costly and it also requires a good physical intuition of the situation. A reliable mathematical model of the testing situation can, therefore, be very valuable and cost-effective as it can reduce experimental work significantly. A good mathematical model enhances the physical intuition and is very useful for parametric studies, as a pedagogical tool, and for the qualification of procedures and personnel. The aim of the present report is to describe work that has been performed to model ultrasonic testing of components that contain a defect close to a nonplanar surface. For nuclear power applications this may be a crack or other defect on the inside of a pipe with a diameter change or connection. This is an extension of the computer program UTDefect, which previously only admits a planar back surface (which is often applicable also to pipes if the pipe diameter is large enough). The problems are investigated in both 2D and 3D, and in 2D both the simpler anti-plane (SH) and the in-plane (P-SV) problem are studied. The 2D investigations are primarily solved to get a 'feeling' for the solution procedure, the discretizations, etc. In all cases an integral equation approach with a Green's function in the kernel is taken. The nonplanar surface is treated by the boundary element method (BEM) where a division of the surface is made in small elements. The defects are mainly cracks, strip-like (in 2D) or rectangular (in 3D), and these are treated with more analytical methods. In 2D also more general defects are treated with the help of their transition (T) matrix. As in other parts of UTDefect the ultrasonic probes in transmission and reception are included in the model. In 3D normalization by a side drilled hole is possible. Some numerical results

  5. Principal components based support vector regression model for on-line instrument calibration monitoring in NPPs

    International Nuclear Information System (INIS)

    Seo, In Yong; Ha, Bok Nam; Lee, Sung Woo; Shin, Chang Hoon; Kim, Seong Jun

    2010-01-01

    In nuclear power plants (NPPs), periodic sensor calibrations are required to assure that sensors are operating correctly. Because sensor operating status is checked only at each fuel outage, faulty sensors may remain undetected for periods of up to 24 months. Moreover, typically only a few sensors are actually found to be out of calibration. For the safe operation of NPPs and the reduction of unnecessary calibration, on-line instrument calibration monitoring is needed. In this study, principal-component-based auto-associative support vector regression (PCSVR) using response surface methodology (RSM) is proposed for sensor signal validation in NPPs. This paper describes the design of a PCSVR-based sensor validation system for a power generation system. RSM is employed to determine the optimal values of the SVR hyperparameters and is compared to the genetic algorithm (GA). The proposed PCSVR model is confirmed with actual plant data from Kori Nuclear Power Plant Unit 3 and is compared with the auto-associative support vector regression (AASVR) and auto-associative neural network (AANN) models. The auto-sensitivity of AASVR is improved by around six times by using PCA, resulting in good detection of sensor drift. Compared to AANN, accuracy and cross-sensitivity are better, while the auto-sensitivity is almost the same. Meanwhile, the proposed RSM for the optimization of the PCSVR algorithm performs even better in terms of accuracy, auto-sensitivity, and averaged maximum error, except in averaged RMS error, and this method is much more time efficient compared to the conventional GA method
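
    A rough sketch of the auto-associative idea behind PCSVR: compress correlated sensor signals with PCA, regress the original signals back from the compressed features with SVR, and flag sensors whose residuals grow. The signals, the injected drift, and the hyperparameters below are invented; this is not the plant model, and RSM-based hyperparameter tuning is omitted.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVR
    from sklearn.multioutput import MultiOutputRegressor

    rng = np.random.default_rng(2)

    # Hypothetical correlated sensor signals (rows = time samples, cols = sensors)
    t = np.linspace(0, 10, 400)
    signals = np.column_stack([np.sin(t) + 0.02 * rng.normal(size=t.size)
                               for _ in range(8)])

    # Auto-associative model: PCA features in, reconstructed sensor values out
    pca = PCA(n_components=3).fit(signals)
    model = MultiOutputRegressor(SVR(C=10.0, epsilon=0.01))
    model.fit(pca.transform(signals), signals)

    # Residual between measured and reconstructed values flags a drifting sensor
    drifted = signals.copy()
    drifted[:, 0] += 0.2                      # simulated calibration drift on sensor 0
    residual = drifted - model.predict(pca.transform(drifted))
    print("mean absolute residual per sensor:",
          np.abs(residual).mean(axis=0).round(3))
    ```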

  6. Validation of the dynamic structural integrity of a nuclear piping component using static inelastic modelling technique

    International Nuclear Information System (INIS)

    Leonard, J.W.

    1975-01-01

    This work is concerned with the evaluation of a quasi-static method as applied to a swing check valve designed to provide emergency shut-off capability subsequent to a postulated break in a steam line. The impact analysis of swinging disk upon the valve seat is an asymmetric problem in dynamic elastoplasticity with potentially large displacements and strains resulting from the impact. To perform a quasi-static analysis for this component the disk and seat region of the valve was isolated from the piping system by special boundary elements and an elastic-plastic finite element model was generated assuming axisymmetric solid ring elements. An equivalent static axisymmetric incremental load system was used to approximate the nonsymmetric initial velocity of impact. Subsequent to the nonlinear incremental finite element analysis by a standard computer software package (MARC-CDC program), a special post-processing program was employed to calculate the incremental sum of external work due to the defined load system. Equating this external work to the initial kinetic energy of impact, parametric curves for displacements, stresses, and strains were obtained as functions of various levels of kinetic energy imparted to the valve at closure. To verify the conservative nature of the assumptions made in the quasi-static model, a comparison was made with a time-dependent, nonlinear, axisymmetric, elastic-plastic finite difference simulation. Another standard computer software package (PISCES-2DL) was used for this dynamic simulation. For a check-point value of initial impact kinetic energy, correlation between the quasi-static finite element and dynamic finite difference analyses is presented. Validations of the assumptions made in the quasi-static analysis and of the results obtained are discussed in detail
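
    The core of the quasi-static approach described above is the equivalence between the external work accumulated along a static loading path and the kinetic energy of impact. The toy calculation below shows that step on an invented load-displacement curve and impact energy; it is a schematic of the energy balance only, not the MARC or PISCES analyses of the record.

    ```python
    import numpy as np

    # Toy incremental static analysis: load steps (N) and resulting displacements (m)
    load = np.linspace(0.0, 200e3, 101)
    displacement = 1e-8 * load + 4e-14 * load**2            # softening toy response

    # Cumulative external work along the loading path (trapezoidal rule)
    work = np.concatenate(([0.0],
                           np.cumsum(0.5 * (load[1:] + load[:-1])
                                     * np.diff(displacement))))

    kinetic_energy = 250.0   # J, hypothetical impact energy of the valve disk
    step = np.searchsorted(work, kinetic_energy)
    print(f"external work matches the impact energy at about {load[step]/1e3:.0f} kN "
          f"({work[step]:.0f} J of cumulative work)")
    ```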

  7. Flight service evaluation of composite components on the Bell Helicopter model 206L: Design, fabrication and testing

    Science.gov (United States)

    Zinberg, H.

    1982-01-01

    The design, fabrication, and testing phases of a program to obtain long term flight service experience on representative helicopter airframe structural components operating in typical commercial environments are described. The aircraft chosen is the Bell Helicopter Model 206L. The structural components are the forward fairing, litter door, baggage door, and vertical fin. The advanced composite components were designed to replace the production parts in the field and were certified by the FAA to be operable through the full flight envelope of the 206L. A description of the fabrication process that was used for each of the components is given. Static failing load tests on all components were done. In addition fatigue tests were run on four specimens that simulated the attachment of the vertical fin to the helicopter's tail boom.

  8. A kinetic model for impact/sliding wear of pressurized water reactor internal components. Application to rod cluster control assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Zbinden, M; Durbec, V

    1996-12-01

    A new concept of an industrial wear model adapted to nuclear plant components is proposed. Its originality is that it is supported, on the one hand, by experimental results obtained from wear machines over relatively short operational times and, on the other hand, by information obtained from operating feedback on the real wear kinetics of reactor components. The proposed model is illustrated by an example which corresponds to a specific real situation. The determination of the coefficients needed to cover the whole set of configurations and the validation of the model in these configurations have been the object of the most recent work. (author). 34 refs.

  9. A kinetic model for impact/sliding wear of pressurized water reactor internal components. Application to rod cluster control assemblies

    International Nuclear Information System (INIS)

    Zbinden, M.; Durbec, V.

    1996-12-01

    A new concept of an industrial wear model adapted to nuclear plant components is proposed. Its originality is that it is supported, on the one hand, by experimental results obtained from wear machines over relatively short operational times and, on the other hand, by information obtained from operating feedback on the real wear kinetics of reactor components. The proposed model is illustrated by an example which corresponds to a specific real situation. The determination of the coefficients needed to cover the whole set of configurations and the validation of the model in these configurations have been the object of the most recent work. (author)

  10. An equiratio mixture model for non-additive components: a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for

  11. Computer-aided process planning in prismatic shape die components based on Standard for the Exchange of Product model data

    Directory of Open Access Journals (Sweden)

    Awais Ahmad Khan

    2015-11-01

    Full Text Available Until a few years ago, insufficient technologies made good integration between die components in design, process planning, and manufacturing impossible. Nowadays, advanced technologies based on the Standard for the Exchange of Product model data are making it possible. This article discusses the three main steps for achieving complete process planning for prismatic parts of die components. These three steps are data extraction, feature recognition, and process planning. The proposed computer-aided process planning system works as part of an integrated system to cover the process planning of any prismatic die component. The system is built using Visual Basic with the EWDraw system for visualizing the Standard for the Exchange of Product model data file. The system works successfully and can cover any type of sheet metal die component. The case study discussed in this article is taken from a large progressive die design.

  12. SR-Site Data report. THM modelling of buffer, backfill and other system components

    Energy Technology Data Exchange (ETDEWEB)

    Aakesson, Mattias; Boergesson, Lennart; Kristensson, Ola (Clay Technology AB, Lund (Sweden))

    2010-03-15

    This report is a supplement to the SR-Site data report. Based on the issues raised in the Process reports concerning THM processes in buffer, backfill and other system components, 22 modelling tasks have been identified, representing different aspects of the repository evolution. The purpose of this data report is to provide parameter values for the materials included in these tasks. Two codes, Code_Bright and Abaqus, have been employed for the tasks. The data qualification has focused on the bentonite material for buffer, backfill and the seals for tunnel plugs and bore-holes. All these system components have been treated as if they were based on MX-80 bentonite. The sources of information and documentation of the data qualification for the parameters for MX-80 have been listed. A substantial part of the refinement, especially concerning parameters used for Code_Bright, is presented in the report. The data qualification has been performed through a motivated and transparent chain; from measurements, via evaluations, to parameter determinations. The measured data was selected to be as recent, traceable and independent as possible. The data sets from this process are thus regarded to be qualified. The conditions for which the data is supplied, the conceptual uncertainties, the spatial and temporal variability and correlations are briefly presented and discussed. A more detailed discussion concerning the data uncertainty due to precision, bias and representativity is presented for measurements of swelling pressure, hydraulic conductivity, shear strength, retention properties and thermal conductivity. The results from the data qualification are presented as a detailed evaluation of measured data. In order to strengthen the relevance of the parameter values and to confirm previously used relations, either newer or independent measurements have been taken into account in the parameter value evaluation. Previously used relations for swelling pressure, hydraulic

  13. SR-Site Data report. THM modelling of buffer, backfill and other system components

    International Nuclear Information System (INIS)

    Aakesson, Mattias; Boergesson, Lennart; Kristensson, Ola

    2010-03-01

    This report is a supplement to the SR-Site data report. Based on the issues raised in the Process reports concerning THM processes in buffer, backfill and other system components, 22 modelling tasks have been identified, representing different aspects of the repository evolution. The purpose of this data report is to provide parameter values for the materials included in these tasks. Two codes, Code_Bright and Abaqus, have been employed for the tasks. The data qualification has focused on the bentonite material for buffer, backfill and the seals for tunnel plugs and bore-holes. All these system components have been treated as if they were based on MX-80 bentonite. The sources of information and documentation of the data qualification for the parameters for MX-80 have been listed. A substantial part of the refinement, especially concerning parameters used for Code_Bright, is presented in the report. The data qualification has been performed through a motivated and transparent chain; from measurements, via evaluations, to parameter determinations. The measured data was selected to be as recent, traceable and independent as possible. The data sets from this process are thus regarded to be qualified. The conditions for which the data is supplied, the conceptual uncertainties, the spatial and temporal variability and correlations are briefly presented and discussed. A more detailed discussion concerning the data uncertainty due to precision, bias and representativity is presented for measurements of swelling pressure, hydraulic conductivity, shear strength, retention properties and thermal conductivity. The results from the data qualification are presented as a detailed evaluation of measured data. In order to strengthen the relevance of the parameter values and to confirm previously used relations, either newer or independent measurements have been taken into account in the parameter value evaluation. Previously used relations for swelling pressure, hydraulic

  14. A general mixed boundary model reduction method for component mode synthesis

    NARCIS (Netherlands)

    Voormeeren, S.N.; Van der Valk, P.L.C.; Rixen, D.J.

    2010-01-01

    A classic issue in component mode synthesis (CMS) methods is the choice for fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the components reduction base. In this paper, a novel mixed boundary CMS method called the “Mixed

  15. Analysis of detached recombining plasmas by collisional-radiative model with energetic electron component

    International Nuclear Information System (INIS)

    Ohno, N.; Motoyama, M.; Takamura, S.

    2001-01-01

    using a CR model for a helium plasma (Goto-Fujimoto code), in which the energetic electron component (electron beam) is taken into account in addition to the bulk electron Maxwellian distribution function. It is found that the bulk electron temperature evaluated with the Boltzmann-plot method tends to decrease with an increase in the electron beam density and/or energy, because the population densities in relatively low excited states become large compared with those in higher excited states. This result agrees with the experimental observations. We have also analyzed the transition of a recombining plasma to an ionizing one, and vice versa, in detail. This analysis can reproduce the inverse ELM phenomena observed in JET and ASDEX-U. (orig.)

  16. Efficient scattering-angle enrichment for a nonlinear inversion of the background and perturbations components of a velocity model

    KAUST Repository

    Wu, Zedong

    2017-07-04

    Reflection-waveform inversion (RWI) can help us reduce the nonlinearity of the standard full-waveform inversion (FWI) by inverting for the background velocity model using the wave-path of a single scattered wavefield to an image. However, current RWI implementations usually neglect the multi-scattered energy, which will cause some artifacts in the image and in the update of the background. To improve existing RWI implementations in taking multi-scattered energy into consideration, we split the velocity model into background and perturbation components, integrate them directly in the wave equation, and formulate a new optimization problem for both components. In this case, the perturbed model is no longer a single-scattering model, but includes all scattering. By introducing a new, cheap implementation of scattering-angle enrichment, the separation of the background and perturbation components can be implemented efficiently. We optimize both components simultaneously to produce updates to the velocity model that are nonlinear with respect to both the background and the perturbation. The newly introduced perturbation model can absorb the non-smooth update of the background in a more consistent way. We apply the proposed approach to the Marmousi model with data that contain frequencies starting from 5 Hz to show that this method can converge to an accurate velocity starting from a linearly increasing initial velocity. Also, our proposed method works well when applied to a field data set.

  17. Principal component analysis and neurocomputing-based models for total ozone concentration over different urban regions of India

    Science.gov (United States)

    Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi

    2012-07-01

    The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in a multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. Models of ANN in the form of a multilayer perceptron trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics like Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
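
    A compact sketch of the PCA-plus-multilayer-perceptron pipeline the abstract describes, using scikit-learn on synthetic predictors; the predictor matrix, the target construction, and the network size are assumptions for illustration, not the ozone data or the model settings of the study.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)

    # Hypothetical daily predictors and a total-ozone target (Dobson units)
    X = rng.normal(size=(1000, 8))
    y = 280 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # PCA removes multicollinearity before the multilayer-perceptron regression
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=4),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                       random_state=0))
    model.fit(X_train, y_train)
    print("Pearson r on held-out days:",
          round(float(np.corrcoef(y_test, model.predict(X_test))[0, 1]), 3))
    ```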

  18. Expression of cellular components in granulomatous inflammatory response in Piaractus mesopotamicus model.

    Directory of Open Access Journals (Sweden)

    Wilson Gómez Manrique

    Full Text Available The present study aimed to describe and characterize the cellular components during the evolution of chronic granulomatous inflammation in the teleost fish pacu (P. mesopotamicus) induced by Bacillus Calmette-Guerin (BCG), using S-100, iNOS and cytokeratin antibodies. Fifty fish (120 ± 5.0 g) were anesthetized; 45 were inoculated with 20 μL of BCG (40 mg/mL; 2.0 x 10^6 CFU/mg) and five with saline (0.65%) into muscle tissue in the laterodorsal region. To evaluate the inflammatory process, nine fish inoculated with BCG and one control were sampled at five periods: the 3rd, 7th, 14th, 21st and 33rd days post-inoculation (DPI). Immunohistochemical examination showed that the marking with anti-S-100 protein and anti-iNOS antibodies was weak, with a diffuse pattern, between the third and seventh DPI. From the 14th to the 33rd day, the marking became stronger and labelled the cytoplasm of the macrophages. Positivity for cytokeratin was first observed at the 14th DPI, with stronger immunostaining on the 33rd day, the period in which the epithelioid cells were more evident and the granuloma was fully formed. Also after the 14th day, a certain degree of cellular organization was observed, due to the arrangement of the macrophages around the inoculated material, with little evidence of edema. The arrangement of the macrophages around the inoculum, the fibroblasts, the lymphocytes and, in most cases, the presence of melanomacrophages formed the granuloma and kept the inoculum isolated at the 33rd DPI. The present study suggested that the granulomatous experimental model using the teleost fish P. mesopotamicus presented a response similar to that observed in mammals, confirming its importance for studies of chronic inflammatory reaction.

  19. Disentangling the associations between parental BMI and offspring body composition using the four‐component model

    Science.gov (United States)

    Grijalva‐Eternod, Carlos; Cortina‐Borja, Mario; Williams, Jane; Fewtrell, Mary; Wells, Jonathan

    2016-01-01

    ABSTRACT Objectives This study sets out to investigate the intergenerational associations between the body mass index (BMI) of parents and the body composition of their offspring. Methods The cross‐sectional data were analyzed for 511 parent–offspring trios from London and south‐east England. The offspring were aged 5–21 years. Parental BMI was obtained by recall and offspring fat mass and lean mass were obtained using the four‐component model. Multivariable regression analysis, with multiple imputation for missing paternal values was used. Sensitivity analyses for levels of non‐paternity were conducted. Results A positive association was seen between parental BMI and offspring BMI, fat mass index (FMI), and lean mass index (LMI). The mother's BMI was positively associated with the BMI, FMI, and LMI z‐scores of both daughters and sons and of a similar magnitude for both sexes. The father's BMI showed similar associations to the mother's BMI, with his son's BMI, FMI, and LMI z‐scores, but no association with his daughter. Sensitivity tests for non‐paternity showed that maternal coefficients remained greater than paternal coefficients throughout but there was no statistical difference at greater levels of non‐paternity. Conclusions We found variable associations between parental BMI and offspring body composition. Associations were generally stronger for maternal than paternal BMI, and paternal associations appeared to differ between sons and daughters. In this cohort, the mother's BMI was statistically significantly associated with her child's body composition but the father's BMI was only associated with the body composition of his sons. Am. J. Hum. Biol. 28:524–533, 2016. © 2016 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc. PMID:26848813

  20. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions by a structured continuous time Markov chain (CTMC). The versatility of the phase-type distributions enhances the flexibility and practicality of the systems. By virtue of these benefits, studies in reliability engineering can be more advanced than previous studies. This study attempts to solve a redundancy allocation problem (RAP) by using these new models. The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, the experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximating error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distribution is used for components. • Reliability model for nonrepairable system is developed using Markov chain. • System is composed of heterogeneous components. • Model provides the real value of standby system reliability, not an approximation. • Redundancy allocation problem is used to show the usefulness of this model.
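
    For a single nonrepairable component with a phase-type time-to-failure distribution (initial distribution alpha over the transient phases, sub-generator T), the reliability is R(t) = alpha · exp(Tt) · 1, which the snippet below evaluates with SciPy's matrix exponential. The three-phase generator is invented, and the closing parallel-system line assumes two independent components in hot standby; it is not the paper's CTMC construction for standby redundancy with imperfect switching.

    ```python
    import numpy as np
    from scipy.linalg import expm

    alpha = np.array([1.0, 0.0, 0.0])          # start in phase 1
    T = np.array([[-2.0,  2.0,  0.0],          # illustrative Erlang-like sub-generator
                  [ 0.0, -2.0,  2.0],
                  [ 0.0,  0.0, -2.0]])
    ones = np.ones(3)

    def reliability(t):
        """Probability that the component has not yet been absorbed (failed) by t."""
        return float(alpha @ expm(T * t) @ ones)

    for t in (0.5, 1.0, 2.0, 5.0):
        r = reliability(t)
        # 1-out-of-2 hot-standby (parallel) system of two independent components
        print(f"t = {t}: component R = {r:.4f}, parallel system R = {1-(1-r)**2:.4f}")
    ```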