WorldWideScience

Sample records for feature scale modeling

  1. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The correspondence between salient features and the scale of interest can be established straightforwardly: detailed features appear at small scales, while features carrying more global shape information show up at large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications such as feature classification and viewpoint selection. Experiments show that our method is a helpful multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.
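
The scale-dependent behaviour this abstract describes can be imitated on a point-sampled surface with a crude curvature proxy (the PCA "surface variation" of each neighbourhood). This is a minimal sketch under that substitute estimator, not the robust curvature estimation the paper uses:

```python
import numpy as np

def surface_variation(points, idx, radius):
    """Curvature proxy at points[idx]: ratio of the smallest PCA eigenvalue
    to the eigenvalue sum over the neighbourhood within `radius`.
    Flat neighbourhoods score 0; bumps score > 0."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[d < radius]
    if len(nbrs) < 4:
        return 0.0
    evals = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
    return float(max(evals[0], 0.0) / evals.sum())

def multi_scale_saliency(points, radii):
    """(n_points, n_scales) array of curvature proxies, one column per radius."""
    return np.array([[surface_variation(points, i, r) for r in radii]
                     for i in range(len(points))])
```

Evaluating at several radii reproduces the qualitative effect: a small bump registers as salient at small radii but washes out at larger ones.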

  2. Active Shape Models Using Scale Invariant Feature Transform

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new active shape model (ASM) is presented, driven by the scale invariant feature transform (SIFT) local descriptor instead of the normalized first-order derivative profiles of the original formulation, to segment lung fields from chest radiographs. The modified SIFT local descriptor, more distinctive than general intensity and gradient features, is used to characterize the image in the vicinity of each pixel at each resolution level during the segmentation optimization procedure. Experimental results show that the proposed method is more robust and accurate than the original ASM in terms of average overlap percentage and average contour distance when segmenting lung fields from a publicly available database.
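
The descriptor-driven landmark search can be sketched generically: slide each landmark along its normal and keep the offset whose local descriptor best matches the statistics learned for that landmark. Here `describe` stands in for a SIFT-like descriptor extractor; all names and the matching rule are illustrative stand-ins, not the paper's formulation:

```python
import numpy as np

def best_candidate(mean_desc, candidate_descs):
    """Index of the candidate descriptor closest (Euclidean) to the
    landmark's trained mean descriptor; real SIFT gives 128-D vectors."""
    return int(np.argmin(np.linalg.norm(candidate_descs - mean_desc, axis=1)))

def update_landmarks(landmarks, normals, describe, mean_descs, search_range=3):
    """One ASM search iteration: move each landmark to the position along
    its normal whose descriptor best matches its trained mean descriptor."""
    new_pts = []
    for p, n, m in zip(landmarks, normals, mean_descs):
        cands = [p + k * n for k in range(-search_range, search_range + 1)]
        descs = np.array([describe(c) for c in cands])
        new_pts.append(cands[best_candidate(m, descs)])
    return np.array(new_pts)
```

In a full ASM the updated positions would then be projected back onto the trained shape model before the next iteration.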

  3. Ontology-based analysis of multi-scale modeling of geographical features

    Institute of Scientific and Technical Information of China (English)

    WANG; Yanhui; LI; Xiaojuan; GONG; Huili

    2006-01-01

    As multi-scale databases based on scale series of map data are built, conceptual models are needed to define proper multi-scale representation formulas and to extract model entities and the relationships among them. However, the resulting multi-scale conceptual abstraction schemas may differ depending on which cognition, abstraction and application views are adopted, which presents an obvious obstacle to the reuse and sharing of spatial data. To facilitate the design of unified, common and objective abstract schema views for multi-scale spatial databases, this paper proposes an ontology-based analysis method for the multi-scale modeling of geographical features. It includes a three-layer ontology model, which serves as the framework for a common multi-scale abstraction schema; an explanation of formulary abstractions, accompanied by definitions of entities and their relationships at the same scale as well as across different scales, which are meant to provide strong feasibility, expansibility and specialization; and a case study involving multi-scale representations of road features, to verify the method's feasibility.

  4. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    Science.gov (United States)

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down.
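
Once features extracted from the new image are matched to model features, the final locally linear alignment reduces to a weighted least-squares pose fit over the matched coordinates. A minimal sketch of that last step via the closed-form Kabsch fit (the paper's MAP formulation with latent features is much richer; the weights here would come from match probabilities):

```python
import numpy as np

def rigid_fit(src, dst, w=None):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch algorithm), with optional per-correspondence weights."""
    if w is None:
        w = np.ones(len(src))
    w = w / w.sum()
    mu_s = w @ src                      # weighted centroids
    mu_d = w @ dst
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```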

  5. Scale Effect Features During Simulation Tests of 3D Printer-Made Vane Pump Models

    Directory of Open Access Journals (Sweden)

    A. I. Petrov

    2015-01-01

    Full Text Available The article discusses the influence of scale effect on the translation of pump parameters from models made with 3D-prototyping methods to full-scale pumps. 3D-printed production of pump model parts, or of entire layouts, has become widespread and can be considered the main approach to vane pump modeling. This is due to the widespread development of pumps in different CAD systems and the significant cost reduction of manufacturing such layouts compared to casting and other traditional methods. The phenomenon of scale effect in vane hydraulic machines, i.e. violation of similarity conditions when translating pump parameters from model to full-scale pumps, is studied in detail in the theory of similarity. However, as experience in the 3D-printer manufacturing of models and their testing accumulates, it becomes clear that accounting for the scale effect in such models differs in a number of ways from conventional techniques. The reason is the micro- and macro-geometry of parts made by different kinds of 3D printers (extrusion, powder sintering, ultraviolet curing, etc.). The article considers the conversion of external and internal mechanical losses, leakages and hydraulic losses, as well as the specifics of balance tests for such models, and presents the basic conversion formulas describing the factors affecting the value of these losses. It shows photographs of part surfaces of models manufactured by 3D printer and subjected to subsequent machining. The paper shows results of translation from several pump models (layouts) to full-scale ones using the techniques described, with an error in translated efficiency not exceeding 1.15%. The conclusion emphasizes the importance of balance tests of models to accumulate statistical data on the scale effect for pump layouts made by different 3D

  6. Discriminative phenomenological features of scale invariant models for electroweak symmetry breaking

    Directory of Open Access Journals (Sweden)

    Katsuya Hashino

    2016-01-01

    Full Text Available Classical scale invariance (CSI) may be one of the solutions to the hierarchy problem. Realistic models for electroweak symmetry breaking based on CSI require extended scalar sectors without mass terms, and the electroweak symmetry is broken dynamically at the quantum level by the Coleman–Weinberg mechanism. We discuss discriminative features of these models. First, using the experimental value of the mass of the discovered Higgs boson h(125), we obtain an upper bound on the mass of the lightest additional scalar boson (≃543 GeV), which does not depend on its isospin and hypercharge. Second, a discriminative prediction for the Higgs–photon–photon coupling is given as a function of the number of charged scalar bosons, by which we can narrow down possible models using current and future data for the di-photon decay of h(125). Finally, for the triple Higgs boson coupling a large deviation (∼+70%) from the SM prediction is universally predicted, independent of the masses, quantum numbers and even the number of additional scalars. These models based on CSI can be well tested at LHC Run II and at future lepton colliders.
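
The Coleman–Weinberg mechanism invoked here can be summarized by the standard Gildener–Weinberg form of the one-loop potential along the classically flat direction (a textbook sketch, not the paper's full multi-field analysis; A and B collect model-dependent one-loop coefficients):

```latex
% One-loop effective potential along the flat direction
V_1(\varphi) = A\,\varphi^4 + B\,\varphi^4 \ln\frac{\varphi^2}{v^2}

% Stationarity at the radiatively generated vacuum \varphi = v fixes A:
\left.\frac{dV_1}{d\varphi}\right|_{\varphi=v} = (4A + 2B)\,v^3 = 0
\quad\Rightarrow\quad A = -\tfrac{B}{2}

% The scalar along the flat direction then acquires a purely radiative mass:
m^2 = \left.\frac{d^2V_1}{d\varphi^2}\right|_{\varphi=v}
    = (12A + 14B)\,v^2 = 8B\,v^2
```

Since B is built from the quartic couplings and multiplicities of the extra scalars, relations of this type underlie the universal predictions (mass bound, triple-Higgs deviation) quoted in the abstract.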

  7. Investigating small scale transient deformation features in convergent settings- Insights from analogue modeling

    Science.gov (United States)

    Santimano, T. N.; Rosenau, M.; Oncken, O.

    2013-12-01

    The evolution of a convergent orogenic belt can be dissected into a combination of small-scale events. Deformation in orogenic belts ranges in timescale from the earthquake cycle to millions of years, and long-term deformation trends are a composite of the smaller events that together create the final geometry of an orogen. It is therefore important to understand the complexity of these events in order to understand the large-scale mechanics of deformation. In this study, we employ analogue models of sand wedges representing the brittle upper crust to visualize temporal and spatial deformation in a convergent setting. The time-series evolution of these convergent sand wedges is monitored by particle image velocimetry (PIV). In addition, the stress change within the wedge, especially at the localization of strain (i.e. faulting events) and between fault events, is monitored by a force sensor attached to the back wall of the experimental setup, against which the sand wedge grows. The experiments test the effect of basal friction, varied between runs, on the final geometry of the wedge. Results show that displacement data from the PIV system, analyzed in the form of strain, correlate well with data from the force sensor. On the larger scale, force increases roughly linearly until strain localizes and a fault forms, causing a sudden drop in force and a release of stress. The magnitude of the force drop after a fault has occurred is related mainly to the horizontal length of the fault. Between fault events, however, the force measurements show a cyclic pattern whose frequency decreases towards a fault event. Over time, as the wedge grows and matures, this inter-fault frequency decreases as well. Varying basal friction demonstrates a cutoff in the maximum stress a wedge can sustain due to the strength of its base. Time-series image analysis of strain combined with stress analysis
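
The stick-slip pattern described (gradual force build-up punctuated by sudden drops at faulting events) can be pulled out of a force record with a simple differencing test; the threshold is arbitrary here and would in practice be tuned to the sensor noise:

```python
import numpy as np

def force_drops(force, threshold):
    """Indices where the force falls by more than `threshold` in a single
    step: a simple proxy for discrete faulting events in the record."""
    d = np.diff(force)
    return np.flatnonzero(d < -threshold) + 1
```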

  8. Finite element modeling of small-scale tapered wood-laminated composite poles with biomimicry features

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; R.C. Tang; Chung Y. Hse

    2008-01-01

    Tapered composite poles with biomimicry features as in bamboo are a new generation of wood laminated composite poles that may some day be considered as an alternative to solid wood poles that are widely used in the transmission and telecommunication fields. Five finite element models were developed with ANSYS to predict and assess the performance of five types of...

  9. Observational Features of Large-Scale Structures as Revealed by the Catastrophe Model of Solar Eruptions

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Large-scale magnetic structures are the main carrier of major eruptions in the solar atmosphere. These structures are rooted in the photosphere and are driven by the unceasing motion of the photospheric material through a series of equilibrium configurations. The motion brings energy into the coronal magnetic field until the system ceases to be in equilibrium. The catastrophe theory for solar eruptions indicates that loss of mechanical equilibrium constitutes the main trigger mechanism of major eruptions, usually showing up as solar flares, eruptive prominences, and coronal mass ejections (CMEs). Magnetic reconnection, which takes place at the very beginning of the eruption as a result of plasma instabilities/turbulence inside the current sheet, converts magnetic energy into heating and kinetic energy that are responsible for solar flares, and for accelerating both plasma ejecta (flows and CMEs) and energetic particles. Various manifestations are thus related to one another, and the physics behind these relationships is catastrophe and magnetic reconnection. This work reports on recent progress in both theoretical research and observations of eruptive phenomena showing the above manifestations. We start by displaying the properties of large-scale structures in the corona and the related magnetic fields prior to an eruption, and show various morphological features of the disrupting magnetic fields. Then, in the framework of the catastrophe theory, we look into the physics behind those features investigated in a succession of previous works, and discuss the approaches they used.

  10. Modeling of the effects of die scale features on bulk plasma conditions in plasma etching equipment

    Energy Technology Data Exchange (ETDEWEB)

    Grapperhaus, M.J.; Kushner, M.J. [Univ. of Illinois, Urbana, IL (United States)

    1997-12-31

    The patterning of the wafer during microelectronics fabrication can have a significant effect on bulk plasma properties as well as producing locally pattern-dependent etch rates. Sputtering of photoresist, loading effects, and other surface interactions couple the chemistry at the wafer surface to the bulk plasma chemistry. A model has been developed which uses a Monte Carlo simulation to model quasi-steady-state die-scale surface chemistry in plasma etching reactors. This model is integrated within the Hybrid Plasma Equipment Model (HPEM), which resolves two-dimensional reactor-scale plasma conditions. The HPEM consists of electromagnetics, electron Monte Carlo simulation, and fluid plasma modules. The surface Monte Carlo simulation is used to modify the flux boundary condition at the wafer surface within the HPEM. Species which react on the surface, or which are created at the surface, are tracked on and near the wafer surface. This gives a quasi-steady-state surface chemistry reaction mechanism resolved in two dimensions on the die scale. An inductively coupled etching reactor is used to demonstrate the effect of surface chemistry on bulk plasma conditions over a range of pressures from 10 to 100 mTorr, hundreds of watts of inductively coupled power, and tens to hundreds of volts of applied RF substrate voltage. Under high-etch-rate conditions, macroloading effects are shown. As pressure is varied from 10 to 100 mTorr, the effect of local photoresist sputtering and redeposition on nearby exposed etch area is shown to increase, which leads to different etch rates near the boundaries of etching regions versus unexposed regions.
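
The coupling described here (a surface Monte Carlo setting the effective flux boundary condition for the reactor-scale model) can be caricatured with a toy site-based simulation; the reaction probability and bookkeeping below are purely illustrative, not the HPEM's chemistry:

```python
import random

def surface_mc(n_sites, n_particles, p_react, seed=0):
    """Toy die-scale surface Monte Carlo: each incident particle hits a
    random surface site and reacts (etching it) with probability p_react,
    otherwise it reflects back into the gas phase. The consumed fraction
    plays the role of the sticking coefficient that a reactor-scale model
    would apply as the wafer boundary condition."""
    rng = random.Random(seed)
    depth = [0] * n_sites          # per-site etch depth (die-scale resolution)
    consumed = 0
    for _ in range(n_particles):
        site = rng.randrange(n_sites)
        if rng.random() < p_react:
            depth[site] += 1       # material removed at this site
            consumed += 1          # particle lost from the gas phase
    return consumed / n_particles, depth
```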

  11. Viscous flow features in scaled-up physical models of normal and pathological vocal phonation

    Energy Technology Data Exchange (ETDEWEB)

    Erath, Byron D., E-mail: berath@purdue.edu [School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, IN 47907 (United States); Plesniak, Michael W., E-mail: plesniak@gwu.edu [Department of Mechanical and Aerospace Engineering, George Washington University, 801 22nd Street NW, Suite 739, Washington, DC 20052 (United States)

    2010-06-15

    Unilateral vocal fold paralysis results when the recurrent laryngeal nerve, which innervates the muscles of the vocal folds, becomes damaged. The loss of muscle and tension control in the damaged vocal fold renders it ineffectual: the mucosal wave disappears during phonation, and the vocal fold becomes largely immobile. The influence of unilateral vocal fold paralysis on viscous flow development within the glottis during phonation, which impacts speech quality, was investigated. Driven, scaled-up vocal fold models were employed to replicate both normal and pathological patterns of vocal fold motion. Spatial and temporal velocity fields were captured using particle image velocimetry and laser Doppler velocimetry. Flow parameters were scaled to match the physiological values associated with human speech. Loss of motion in one vocal fold resulted in a suppression of typical glottal flow fields, including decreased spatial variability in the location of the flow separation point throughout the phonatory cycle, as well as a decrease in the vorticity magnitude.

  12. Implications of Small-Scale Flow Features to Modeling Dispersion over Complex Terrain.

    Science.gov (United States)

    Banta, R. M.; Olivier, L. D.; Gudiksen, P. H.; Lange, R.

    1996-03-01

    Small-scale, topographically forced wind systems often have a strong influence on flow over complex terrain. The problem is that these systems are very difficult to measure because of their limited spatial and temporal extent; they can nonetheless be important in the atmospheric transport of hazardous materials. For example, a nocturnal exit jet (a narrow stream of cold air) flowed from Eldorado Canyon at the interface between the Rocky Mountains and the Colorado plains near the Rocky Flats Plant (RFP), sweeping over RFP for about 3 h in the middle of the night of 4-5 February 1991. It extended in depth from a few tens of meters to approximately 800 m above the ground. Because the jet was so narrow (2 km wide), it was poorly sampled by the surface meteorological mesonet, but it did prove to have an effect on the dispersion of tracer material released from RFP, producing a secondary peak in measured concentration to the southeast of RFP. The existence and behavior of the jet were documented by the Environmental Technology Laboratory's Doppler lidar, a scanning, active remote-sensing system that provides fine-resolution wind measurements. The lidar was deployed as part of a wintertime study of flow and dispersion in the RFP vicinity during February 1993. The MATHEW-ADPIC atmospheric dispersion model was run using the case study data from this night. It consists of three major modules: an interpolation scheme; MATHEW, a diagnostic wind-flow algorithm that calculates a mass-consistent interpolated flow; and ADPIC, a diffusion algorithm. The model did an adequate job of representing the main lobe of the tracer transport, but the secondary lobe resulting from the Eldorado Canyon exit jet was absent from the model result. Because the jet was not adequately represented in the input data, it did not appear in the modeled wind field; thus, the effects of the jet on the transport of tracer material were not properly simulated by the diagnostic model.

  13. Large-scale features of Pliocene climate: results from the Pliocene Model Intercomparison Project

    OpenAIRE

    A. M. Haywood; D. J. Hill; Dolan, A. M.; B. L. Otto-Bliesner; F. Bragg; Chan, W.-L.; Chandler, M. A.; Contoux, C.; H. J. Dowsett; A. Jost; Y. Kamae; Lohmann, G.; Lunt, D. J.; Abe-Ouchi, A.; Pickering, S.J.

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident...

  14. Large-scale features of Pliocene climate: results from the Pliocene Model Intercomparison Project

    Directory of Open Access Journals (Sweden)

    A. M. Haywood

    2012-07-01

    Full Text Available Climate and environments of the mid-Pliocene Warm Period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a co-ordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data/model comparison highlights the potential for models to underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Sensitivity tests exploring the "known unknowns" in modelling Pliocene climate specifically relevant to the high latitudes are also essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) suggest that ESS is greater than Climate Sensitivity (CS), and that the ratio of ESS to CS is between 1 and 2, with a best estimate of 1.5.

  15. Large-scale features of Pliocene climate: results from the Pliocene Model Intercomparison Project

    Directory of Open Access Journals (Sweden)

    A. M. Haywood

    2013-01-01

    Full Text Available Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data/model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  16. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    Science.gov (United States)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; Kamae, Y.; Lohmann, G.; Lunt, D. J.; Abe-Ouchi, A.; Pickering, S. J.; Ramstein, G.; Rosenbloom, N. A.; Salzmann, U.; Sohl, L.; Stepanek, C.; Ueda, H.; Yan, Q.; Zhang, Z.

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-model/data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data/model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  17. Modeling oxygen isotopes in the Pliocene: Large-scale features over the land and ocean

    Science.gov (United States)

    Tindall, Julia C.; Haywood, Alan M.

    2015-09-01

    The first isotope-enabled general circulation model (GCM) simulations of the Pliocene are used to discuss the interpretation of δ18O measurements for a warm climate. The model suggests that spatial patterns of Pliocene ocean surface δ18O (δ18Osw) were similar to those of the preindustrial period; however, Arctic and coastal regions were relatively depleted, while South Atlantic and Mediterranean regions were relatively enriched. Modeled δ18Osw anomalies are closely related to modeled salinity anomalies, which supports using δ18Osw as a paleosalinity proxy. Modeled Pliocene precipitation δ18O (δ18Op) was enriched relative to preindustrial values (but with depletions in some regions). The δ18Op anomalies are related to temperature anomalies; however, the relationship is neither linear nor spatially coincident: a large δ18Op signal does not always translate into a large temperature signal. These results suggest that isotope modeling can lead to enhanced synergy between climate models and climate proxy data. The model can relate proxy data to climate in a physically based way even when the relationship is complex and nonlocal. The δ18O-climate relationships identified here from a GCM could not be determined from transfer functions or simple models.

  18. Multi-Scale Salient Features for Analyzing 3D Shapes

    Institute of Scientific and Technical Information of China (English)

    Yong-Liang Yang; Chao-Hui Shen

    2012-01-01

    Extracting feature regions on mesh models is crucial for shape analysis and understanding. It can be widely used for various 3D content-based applications in the graphics and geometry fields. In this paper, we present a new algorithm for extracting multi-scale salient features on meshes, based on robust estimation of curvature on multiple scales. The correspondence between salient features and the scale of interest can be established straightforwardly: detailed features appear at small scales, while features carrying more global shape information show up at large scales. We demonstrate that this kind of multi-scale description of features accords with human perception and can be further used for several applications such as feature classification and viewpoint selection. Experiments show that our method is a helpful multi-scale analysis tool for studying 3D shapes.

  19. Hopewell Furnace NHS Small Scale Features (Linear Features)

    Data.gov (United States)

    National Park Service, Department of the Interior — This shapefile represents the linear small scale features found at Hopewell Furnace National Historic Site based on the Cultural Landscape Report completed in...

  20. Featured Invention: Laser Scaling Device

    Science.gov (United States)

    Dunn, Carol Anne

    2008-01-01

    In September 2003, NASA signed a nonexclusive license agreement with Armor Forensics, a subsidiary of Armor Holdings, Inc., for the laser scaling device under the Innovative Partnerships Program. Coupled with a measuring program, also developed by NASA, the unit provides crime scene investigators with the ability to shoot photographs at scale without having to physically enter the scene, analyzing details such as bloodspatter patterns and graffiti. This ability keeps the scene's components intact and pristine for the collection of information and evidence. The laser scaling device elegantly solved a pressing problem for NASA's shuttle operations team and also provided industry with a useful tool. For NASA, the laser scaling device is still used to measure divots or damage to the shuttle's external tank and other structures around the launchpad. When the invention also met similar needs within industry, the Innovative Partnerships Program provided information to Armor Forensics for licensing and marketing the laser scaling device. Jeff Kohler, technology transfer agent at Kennedy, added, "We also invited a representative from the FBI's special photography unit to Kennedy to meet with Armor Forensics and the innovator. Eventually the FBI ended up purchasing some units. Armor Forensics is also beginning to receive interest from DoD [Department of Defense] for use in military crime scene investigations overseas."

  1. Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Pedersen, Thomas;

    2015-01-01

    This paper presents an offline approach to analyzing feature interactions in embedded systems. The approach consists of a systematic process to gather the necessary information about system components and their models. The model is first specified in terms of predicates, before being refined to timed automata. The consistency of the model is verified at different development stages, and the correct linkage between the predicates and their semantic model is checked. The approach is illustrated on a use case from home automation.

  2. Feature Technology in Product Modeling

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xu; NING Ruxin

    2006-01-01

    A unified feature definition is proposed. Features are form-concentrated and can be used to model product functionalities, assembly relations, and part geometries. The feature model is given and a feature classification is introduced, including functional, assembly, structural, and manufacturing features. A prototype modeling system is developed in Pro/ENGINEER that can define assembly and user-defined form features.

  3. Feature Scaling via Second-Order Cone Programming

    Directory of Open Access Journals (Sweden)

    Zhizheng Liang

    2016-01-01

    Full Text Available Feature scaling has attracted considerable attention over the past several decades because of its important role in feature selection. In this paper, a novel algorithm for learning scaling factors of features is proposed. It first assigns a nonnegative scaling factor to each feature of the data and then adopts a generalized performance measure to learn the optimal scaling factors. It is of interest to note that the proposed model can be transformed into a convex optimization problem: second-order cone programming (SOCP). Thus the scaling factors of features in our method are globally optimal in some sense. Several experiments on simulated data, UCI data sets, and a gene data set are conducted to demonstrate that the proposed method is more effective than previous methods.
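
The SOCP machinery is beyond a short snippet, but the idea of assigning each feature a nonnegative, data-driven scaling factor can be illustrated with the classical Fisher-criterion baseline (a stand-in heuristic, not the paper's algorithm):

```python
import numpy as np

def fisher_scaling(X, y):
    """Nonnegative per-feature scaling factors from the Fisher criterion
    (between-class variance over within-class variance), normalized so
    the largest factor is 1."""
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    s = between / np.maximum(within, 1e-12)  # guard against zero within-class spread
    return s / s.max()
```

Multiplying the data by these factors before a distance-based classifier up-weights discriminative features and suppresses uninformative ones, which is the role the SOCP-learned factors play in the paper.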

  4. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    Science.gov (United States)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and helps focus on suspected ship areas, avoiding a separate land/sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms such as waves and small ribbon clouds remain, so simple shape and texture analysis is adopted to distinguish ships from non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
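
The LBP texture signature used to screen candidate chips is straightforward to reproduce; here is a minimal 8-neighbour implementation over a grayscale chip (no rotation-invariant or uniform-pattern refinements):

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern code for every interior pixel of a
    grayscale image, returned as a normalized 256-bin histogram."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=np.uint16)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint16) << bit  # set bit if neighbour >= centre
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Chips whose histograms concentrate in the flat codes are low-texture sea; a classifier (an SVM in the paper) over these histograms then performs the ship/non-ship split.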

  5. A NOVEL SHIP DETECTION METHOD FOR LARGE-SCALE OPTICAL SATELLITE IMAGES BASED ON VISUAL LBP FEATURE AND VISUAL ATTENTION MODEL

    Directory of Open Access Journals (Sweden)

    S. Haigang

    2016-06-01

    Full Text Available Reliable ship detection in optical satellite images has wide applications in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets, in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is high-speed and helps focus on the suspected ship areas, avoiding the land-sea separation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparse distribution of ships in images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms remain, such as microwaves and small ribbon clouds, so simple shape and texture analysis is adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.

  6. Genomic Feature Models

    DEFF Research Database (Denmark)

    Sørensen, Peter; Edwards, Stefan McKinnon; Rohde, Palle Duun

    Whole-genome sequences and multiple trait phenotypes from large numbers of individuals will soon be available in many populations. Well established statistical modeling approaches enable the genetic analyses of complex trait phenotypes while accounting for a variety of additive and non-additive g...... regions and gene ontologies) that provide better model fit and increase predictive ability of the statistical model for this trait....

  7. Multi scale feature based matched filter processing

    Institute of Scientific and Technical Information of China (English)

    LI Jun; HOU Chaohuan

    2004-01-01

    Using the extreme difference in self-similarity and kurtosis at large-level scales of the wavelet transform approximation between PTFM (Pulse Trains of Frequency Modulated) signals and their reverberation, a feature-based matched filter method using the classify-before-detect paradigm is proposed to improve detection performance in reverberation and multipath environments. Processing of lake-trial data showed that the processing gain of the proposed method is about 10 dB greater than that of the matched filter. In multipath environments, the detection performance of the matched filter degrades severely, while that of the proposed method is much less affected. This shows that the method is much more robust to multipath effects.
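The matched-filter core that this record builds on reduces to a sliding correlation of the received signal against the known template. A minimal sketch, with a made-up signal and template:

```python
def matched_filter(signal, template):
    """Slide the template over the signal, correlating at each offset;
    return the offset with the highest correlation score (the detection)."""
    n, m = len(signal), len(template)
    scores = [sum(signal[i + j] * template[j] for j in range(m))
              for i in range(n - m + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

The feature-based variant in the abstract replaces the raw waveform with wavelet-domain features before this correlation step.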

  8. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically ex

  10. Feature Hashing for Large Scale Multitask Learning

    CERN Document Server

    Weinberger, Kilian; Attenberg, Josh; Langford, John; Smola, Alex

    2009-01-01

    Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
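The hashing trick with a signed hash, as analyzed in this paper, can be sketched as follows. The dimension and the CRC32-based hash are illustrative choices, not the authors' implementation:

```python
import zlib

def hash_features(tokens, dim=16):
    """Hashing trick: map arbitrary tokens into a fixed-size vector.
    A second, sign bit makes collisions cancel in expectation, which is
    the property the paper's tail bounds rely on."""
    vec = [0.0] * dim
    for t in tokens:
        h = zlib.crc32(t.encode())
        idx = h % dim                          # bucket
        sign = 1.0 if (h >> 16) % 2 == 0 else -1.0  # signed hash
        vec[idx] += sign
    return vec
```

Because the map is linear in the tokens, vectors for separate token sets simply add, which is what makes the scheme convenient for multitask learning with a shared hashed space.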

  11. Drift Scale THM Model

    Energy Technology Data Exchange (ETDEWEB)

    J. Rutqvist

    2004-10-07

    This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports ''Seepage Model for PA Including Drift Collapse'' and ''Abstraction of Drift Seepage'', and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports ''Features, Events, and Processes in Unsaturated Zone Flow and Transport'' and ''Features, Events, and Processes: Disruptive Events''. The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the THM coupled processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because

  12. Genome-Scale Models

    DEFF Research Database (Denmark)

    Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel

    2016-01-01

    An introduction to genome-scale models, how to build and use them, will be given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, both in academia and in industry, both for modeling chemical pr...

  13. Component Composition Using Feature Models

    DEFF Research Database (Denmark)

    Eichberg, Michael; Klose, Karl; Mitschke, Ralf;

    2010-01-01

    In general, components provide and require services and two components are bound if the first component provides a service required by the second component. However, certain variability in services - w.r.t. how and which functionality is provided or required - cannot be described using standard...... interface description languages. If this variability is relevant when selecting a matching component then human interaction is required to decide which components can be bound. We propose to use feature models for making this variability explicit and (re-)enabling automatic component binding. In our...... approach, feature models are one part of service specifications. This enables to declaratively specify which service variant is provided by a component. By referring to a service's variation points, a component that requires a specific service can list the requirements on the desired variant. Using...

  14. n-SIFT: n-dimensional scale invariant feature transform.

    Science.gov (United States)

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.
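The gradient-orientation histogram underlying SIFT-style descriptors can be sketched as follows. This shows only the 2-D case; n-SIFT generalizes it with hyperspherical coordinates and multidimensional histograms:

```python
import math

def orientation_histogram(gx, gy, bins=8):
    """Histogram of 2-D gradient orientations, each sample weighted by its
    gradient magnitude -- the building block of SIFT-style descriptors."""
    hist = [0.0] * bins
    for dx, dy in zip(gx, gy):
        mag = math.hypot(dx, dy)
        ang = math.atan2(dy, dx) % (2 * math.pi)  # orientation in [0, 2*pi)
        hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    return hist
```

In a full descriptor, such histograms are computed over subregions around a keypoint and concatenated into the feature vector that is then matched between images.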

  15. Kernel based visual tracking with scale invariant features

    Institute of Scientific and Technical Information of China (English)

    Risheng Han; Zhongliang Jing; Yuanxiang Li

    2008-01-01

    Kernel based tracking has two disadvantages: the tracking window size cannot be adjusted efficiently, and the kernel based color distribution may not have enough ability to discriminate the object from a cluttered background. To boost the features' discriminating ability, both scale invariant features and kernel based color distribution features are used as descriptors of the tracked object. The proposed algorithm can keep tracking objects of varying scales even when the surrounding background is similar to the object's appearance.

  16. Discrete Feature Model (DFM) User Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Geier, Joel (Clearwater Hardrock Consulting, Corvallis, OR (United States))

    2008-06-15

    This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of such systems, by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features. With this

  17. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    In India, limited awareness of the deaf and hard-of-hearing community widens the communication gap between that community and the hearing. Sign language is commonly developed for deaf and hard-of-hearing people to convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.

  18. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    In India, limited awareness of the deaf and hard-of-hearing community widens the communication gap between that community and the hearing. Sign language is commonly developed for deaf and hard-of-hearing people to convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.

  19. SMALL SCALE MORPHODYNAMICAL MODELLING

    Institute of Scientific and Technical Information of China (English)

    D. Ditschke; O. Gothel; H. Weilbeer

    2001-01-01

    Long term morphological simulations using complete coupled models lead to very time consuming computations. Latteux (1995) presented modelling techniques developed for tidal current situations in order to reduce the computational effort. In this paper the applicability of such methods to small scale problems is investigated. It is pointed out that these methods can be transferred to small scale problems using the periodicity of the vortex shedding process.

  20. Integrating Local Scale Drainage Measures in Meso Scale Catchment Modelling

    Directory of Open Access Journals (Sweden)

    Sandra Hellmers

    2017-01-01

    Full Text Available This article presents a methodology to optimize the integration of local scale drainage measures in catchment modelling. The methodology enables one to zoom into the processes (physically, spatially and temporally) where detailed physically based computation is required and to zoom out where lumped conceptualized approaches are applied. It allows the definition of parameters and computation procedures on different spatial and temporal scales. Three methods are developed to integrate features of local scale drainage measures in catchment modelling: (1) different types of local drainage measures are spatially integrated in catchment modelling by a data mapping; (2) interlinked drainage features between data objects are enabled on the meso, local and micro scales; (3) a method for modelling multiple interlinked layers on the micro scale is developed. For the computation of flow routing on the meso scale, the results of the local scale measures are aggregated according to their contributing inlet in the network structure. The implementation of the methods is realized in a semi-distributed rainfall-runoff model. The implemented micro scale approach is validated with a laboratory physical model to confirm the credibility of the model. A study of a river catchment of 88 km2 illustrated the applicability of the model on the regional scale.

  1. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The pur

  2. Return of feature-based cost modeling

    Science.gov (United States)

    Creese, Robert C.; Patrawala, Taher B.

    1998-10-01

    Feature based cost modeling is thought of as a relatively new approach to cost modeling, but it saw considerable development in the 1950s. Substantial work was published in the 1950s by Boeing on costs for various casting processes--sand casting, die casting, investment casting and permanent mold casting--as a function of a single casting feature, casting volume. Additional approaches to feature based cost modeling have since been made; this work reviews previous efforts and proposes an integrated model for feature based cost modeling.

  3. Correlated Non-Parametric Latent Feature Models

    CERN Document Server

    Doshi-Velez, Finale

    2012-01-01

    We are often interested in explaining data through a set of hidden factors or features. When the number of hidden features is unknown, the Indian Buffet Process (IBP) is a nonparametric latent feature model that does not bound the number of active features in a dataset. However, the IBP assumes that all latent features are uncorrelated, making it inadequate for many real-world problems. We introduce a framework for correlated nonparametric feature models, generalising the IBP. We use this framework to generate several specific models and demonstrate applications on real-world datasets.
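A minimal sampler for the standard (uncorrelated) IBP that this paper generalizes might look like the following; the correlated extension the paper introduces is not shown:

```python
import math
import random

def sample_ibp(n, alpha, seed=0):
    """Draw a binary feature matrix from the Indian Buffet Process:
    customer i takes existing dish k with probability m_k / i (m_k = number
    of previous takers), then samples Poisson(alpha / i) brand-new dishes."""
    rng = random.Random(seed)
    counts = []   # m_k for each dish seen so far
    rows = []
    for i in range(1, n + 1):
        row = [1 if rng.random() < m / i else 0 for m in counts]
        for k, z in enumerate(row):
            counts[k] += z
        # Poisson(alpha / i) new dishes via inversion sampling (small rates)
        lam, new = alpha / i, 0
        p = math.exp(-lam)
        cum, u = p, rng.random()
        while u > cum:
            new += 1
            p *= lam / new
            cum += p
        counts.extend([1] * new)
        row.extend([1] * new)
        rows.append(row)
    width = len(counts)
    return [r + [0] * (width - len(r)) for r in rows]  # pad earlier rows
```

The number of active features (columns) is unbounded a priori and grows roughly logarithmically with the number of rows, which is the nonparametric property the abstract refers to.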

  4. AN ADVANCED SCALE INVARIANT FEATURE TRANSFORM ALGORITHM FOR FACE RECOGNITION

    OpenAIRE

    Mohammad Mohsen Ahmadinejad; Elizabeth Sherly

    2016-01-01

    In computer vision, Scale-invariant feature transform (SIFT) algorithm is widely used to describe and detect local features in images due to its excellent performance. But for face recognition, the implementation of SIFT was complicated because of detecting false key-points in the face image due to irrelevant portions like hair style and other background details. This paper proposes an algorithm for face recognition to improve recognition accuracy by selecting relevant SIFT key-points only th...

  5. Forecasting decadal and shorter time-scale solar cycle features

    Science.gov (United States)

    Dikpati, Mausumi

    2016-07-01

    Solar energetic particles and magnetic fields reach the Earth through the interplanetary medium and affect it in various ways, producing beautiful aurorae, but also electrical blackouts and damage to our technology-dependent economy. The root of energetic solar outputs is the solar activity cycle, which is most likely caused by dynamo processes inside the Sun. It is a formidable task to accurately predict the amplitude, onset and peak timings of a solar cycle. After reviewing all solar cycle prediction methods, including empirical as well as physical model-based schemes, I will describe what we have learned from both validation and nonvalidation of cycle 24 forecasts, and how to refine the model-based schemes for upcoming cycle 25 forecasts. Recent observations indicate that within a solar cycle there are shorter time-scale 'space weather' features, such as bursts of various forms of activity with approximately one year periodicity. I will demonstrate how global tachocline dynamics could play a crucial role in producing such space weather. The National Center for Atmospheric Research is sponsored by the National Science Foundation.

  6. Multi Scale Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Huemmer, Matthias [AREVA NP GmbH, Paul-Gossen Strasse 100, Erlangen (Germany)

    2008-07-01

    The safety of the Reactor Pressure Vessels (RPV) must be assured and demonstrated by safety assessments against brittle fracture according to the codes and standards. In addition to these deterministic methods, researchers developed statistic methods, so called local approach (LA) models, to predict specimen or component failure. These models transfer the microscopic fracture events to the macro scale by means of Weibull stresses and therefore can describe the fracture behavior more accurate. This paper will propose a recently developed LA model. After the calibration of the model parameters the wide applicability of the model will be demonstrated. Therefore a large number of computations, based on 3D finite element simulations, have been conducted, containing different specimen types and materials in unirradiated and irradiated condition. Comparison of the experimental data with the predictions attained by means of the LA model shows that the fracture behavior can be well described. (authors)

  7. A multidimensional representation model of geographic features

    Science.gov (United States)

    Usery, E. Lynn; Timson, George; Coletti, Mark

    2016-01-28

    A multidimensional model of geographic features has been developed and implemented with data from The National Map of the U.S. Geological Survey. The model, programmed in C++ and implemented as a feature library, was tested with data from the National Hydrography Dataset demonstrating the capability to handle changes in feature attributes, such as increases in chlorine concentration in a stream, and feature geometry, such as the changing shoreline of barrier islands over time. Data can be entered directly, from a comma separated file, or features with attributes and relationships can be automatically populated in the model from data in the Spatial Data Transfer Standard format.

  8. Selective attention to temporal features on nested time scales.

    Science.gov (United States)

    Henry, Molly J; Herrmann, Björn; Obleser, Jonas

    2015-02-01

    Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined temporal features. The current study aimed to identify and characterize the neural network specifically involved in this feature-selective attention to time. We used a novel paradigm where listeners judged either the duration or modulation rate of auditory stimuli, and in which the stimulation, working memory demands, response requirements, and task difficulty were held constant. A first analysis identified all brain regions where individual brain activation patterns were correlated with individual behavioral performance patterns, which thus supported temporal judgments generically. A second analysis then isolated those brain regions that specifically regulated selective attention to temporal features: Neural responses in a bilateral fronto-parietal network including insular cortex and basal ganglia decreased with degree of change of the attended temporal feature. Critically, response patterns in these regions were inverted when the task required selectively ignoring this feature. The results demonstrate how the neural analysis of complex acoustic stimuli with multiple temporal features depends on a fronto-parietal network that simultaneously regulates the selective gain for attended and ignored temporal features.

  9. Orthogonal design for scale invariant feature transform optimization

    Science.gov (United States)

    Ding, Xintao; Luo, Yonglong; Yi, Yunyun; Jie, Biao; Wang, Taochun; Bian, Weixin

    2016-09-01

    To improve object recognition capabilities in applications, we used orthogonal design (OD) to choose a group of optimal parameters in the parameter space of the scale invariant feature transform (SIFT). For the global optimization (GOP) and local optimization (LOP) objectives, our aim is to show the operation of OD on the SIFT method. The GOP aims to increase the number of correctly detected true matches (NoCDTM) and the ratio of NoCDTM to all matches. In contrast, the LOP mainly aims to increase the recall-precision performance. In detail, we first abstracted the SIFT method as a 9-way fixed-effect model with an interaction. Second, we designed a mixed orthogonal array, MA(64,23420,2), and its header table to optimize the SIFT parameters. Finally, two groups of parameters were obtained for GOP and LOP after orthogonal experiments and statistical analyses were implemented. Our experiments on four groups of data demonstrate that, compared with the state-of-the-art methods, GOP obtains more correct matches and is more effective for object recognition. In addition, LOP is favorable in terms of recall-precision.

  10. Music Genre Classification using the multivariate AR feature integration model

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Meng, Anders

    2005-01-01

    Music genre classification systems are normally built as a feature extraction module followed by a classifier. The features are often short-time features with time frames of 10-30 ms, although several characteristics of music require larger time scales. Thus, larger time frames are needed to take...... informative decisions about musical genre. For the MIREX music genre contest several authors derive long time features based either on statistical moments and/or temporal structure in the short time features. In our contribution we model a segment (1.2 s) of short time features (texture) using a multivariate...... autoregressive model. Other authors have applied simpler statistical models such as the mean-variance model, which also has been included in several of this year's MIREX submissions, see e.g. Tzanetakis (2005); Burred (2005); Bergstra et al. (2005); Lidy and Rauber (2005).
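A univariate stand-in for the AR feature-integration idea in this record is the lag-1 autoregressive coefficient of a short-time feature sequence; the paper itself fits a full multivariate AR model over the segment:

```python
def ar1_feature(x):
    """Lag-1 autoregressive coefficient of a feature sequence, estimated as
    lag-1 autocovariance over variance -- a one-number summary of the
    temporal structure that short-time statistics alone would miss."""
    n = len(x)
    mu = sum(x) / n
    num = sum((x[t] - mu) * (x[t - 1] - mu) for t in range(1, n))
    den = sum((v - mu) ** 2 for v in x)
    return num / den
```

A smoothly varying feature track yields a coefficient near +1, while a rapidly alternating one yields a negative coefficient, so the value distinguishes temporal textures with identical means and variances.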

  11. Bidirectional scale-invariant feature transform feature matching algorithms based on priority k-d tree search

    National Research Council Canada - National Science Library

    Liu, XiangShao; Zhou, Shangbo; Li, Hua; Li, Kun

    2016-01-01

    In this article, a bidirectional feature matching algorithm and two extended algorithms based on the priority k-d tree search are presented for the image registration using scale-invariant feature transform features...
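The bidirectional (mutual nearest neighbour) matching step can be sketched with brute-force search; the article's priority k-d tree only accelerates the same computation:

```python
def bidirectional_matches(desc_a, desc_b):
    """Keep only mutual nearest-neighbour pairs between two descriptor sets.
    Brute-force nearest neighbour here; a k-d tree speeds up the same search."""
    def nn(q, pool):
        return min(range(len(pool)),
                   key=lambda j: sum((q[k] - pool[j][k]) ** 2
                                     for k in range(len(q))))
    matches = []
    for i, d in enumerate(desc_a):
        j = nn(d, desc_b)
        if nn(desc_b[j], desc_a) == i:   # cross-check in the other direction
            matches.append((i, j))
    return matches
```

The cross-check discards one-sided matches, which is what makes bidirectional matching more reliable than a single forward nearest-neighbour pass.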

  13. A computational model for feature binding

    Institute of Scientific and Technical Information of China (English)

    SHI ZhiWei; SHI ZhongZhi; LIU Xi; SHI ZhiPing

    2008-01-01

    The "Binding Problem" is an important problem across many disciplines, including psychology, neuroscience, computational modeling, and even philosophy. In this work, we proposed a novel computational model, the Bayesian Linking Field Model, for feature binding in visual perception, by combining the idea of the noisy neuron model, the Bayesian method, the Linking Field Network and a competitive mechanism. Simulation experiments demonstrated that our model fulfills the task of feature binding in visual perception and provides some enlightening ideas for future research.

  15. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built....... The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...

  16. The Future of Primordial Features with Large-Scale Structure Surveys

    CERN Document Server

    Chen, Xingang; Huang, Zhiqi; Namjoo, Mohammad Hossein; Verde, Licia

    2016-01-01

    Primordial features are one of the most important extensions of the Standard Model of cosmology, providing a wealth of information on the primordial universe, ranging from discrimination between inflation and alternative scenarios, new particle detection, to fine structures in the inflationary potential. We study the prospects of future large-scale structure (LSS) surveys on the detection and constraints of these features. We classify primordial feature models into several classes, and for each class we present a simple template of power spectrum that encodes the essential physics. We study how well the most ambitious LSS surveys proposed to date, including both spectroscopic and photometric surveys, will be able to improve the constraints with respect to the current Planck data. We find that these LSS surveys will significantly improve the experimental sensitivity to feature signals that are oscillatory in scale, due to the 3D information. For a broad range of models, these surveys will be able to reduce t...

  17. Featured Image: Modeling Supernova Remnants

    Science.gov (United States)

    Kohler, Susanna

    2016-05-01

    This image shows a computer simulation of the hydrodynamics within a supernova remnant. The mixing between the outer layers (where color represents the log of density) is caused by turbulence from the Rayleigh-Taylor instability, an effect that arises when the expanding core gas of the supernova is accelerated into denser shell gas. The past standard for supernova-evolution simulations was to perform them in one dimension and then, in post-processing, manually smooth out regions that undergo Rayleigh-Taylor turbulence (an intrinsically multidimensional effect). But in a recent study, Paul Duffell (University of California, Berkeley) has explored how a 1D model could be used to reproduce the multidimensional dynamics that occur in turbulence from this instability. For more information, check out the paper below!CitationPaul C. Duffell 2016 ApJ 821 76. doi:10.3847/0004-637X/821/2/76

  18. Local-Scale Simulations of Nucleate Boiling on Micrometer Featured Surfaces: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moreno, Gilberto [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-03

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects are solved on a featured surface resolved grid. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller size cavities nucleate with higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device scale continuum models where local feature representation is not possible.

  19. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need a huge amount of training data to cover sufficient biological variability. Learning methods that scale badly with the number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which... The next main contribution of the thesis deals with learning features from data rather than having a predefined feature set. We explore the deep learning approach of convolutional neural networks (CNN) for segmenting three-dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of the 3D...

  20. Thermal Behavior of Unusual Local-Scale Features on Vesta

    Science.gov (United States)

    Tosi, Federico; Capria, Maria Teresa; DeSanctis, Maria Cristina; Palomba, Ernesto; Capaccioni, Fabrizio; Combe, Jean-Philippe; Titus, Timothy; Mittlefehldt, David W.; Li, Jian-Yang; Russell, Christopher T.

    2012-01-01

    On Vesta, the thermal behavior of areas of unusual albedo seen at the local scale can be related to physical properties that can provide information about the origin of those materials. Dawn's Visible and Infrared Mapping Spectrometer (VIR) hyperspectral cubes are used to retrieve surface temperatures and emissivities, with high accuracy as long as temperatures are greater than 180 K. Data acquired in the Survey phase (23 July through 29 August 2011) show several unusual surface features: 1) high-albedo (bright) and low-albedo (dark) material deposits, 2) spectrally distinct ejecta and pitted materials, 3) regions suggesting finer-grained materials. Some of the unusual dark and bright features were re-observed by VIR in the subsequent High-Altitude Mapping Orbit (HAMO) and Low-Altitude Mapping Orbit (LAMO) phases at increased pixel resolution. In particular, bright and dark surface materials on Vesta, and pitted materials, are currently being investigated by the Dawn team. In this work we present temperature maps and emissivities of several local-scale features that were observed by Dawn under different illumination conditions and different local solar times. To calculate surface temperatures, we applied a Bayesian approach to nonlinear inversion based on the Kirchhoff law and the Planck function, and whose results were compared with those provided by the application of alternative methods. Data from the IR channel of VIR show that bright regions generally correspond to regions with lower thermal emission, i.e. lower temperature, while dark regions correspond to areas with higher thermal emission, i.e. higher temperature. This behavior confirms that many of the dark appearances in the VIS mainly reflect albedo variations, and not, for example, shadowing. During maximum daily insolation, dark features in the equatorial region may rise to temperatures greater than 270 K, while brightest features stop at roughly 258 K for similar local solar times. However, pitted

  1. Feature extraction for structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois [Los Alamos National Laboratory; Farrar, Charles [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Nishio, Mayuko [UNIV OF TOKYO; Worden, Keith [UNIV OF SHEFFIELD; Takeda, Nobuo [UNIV OF TOKYO

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method for multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must consider not only the sensitivity of the features being used, but also the correlation of the parameters being compared.
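    The Mahalanobis-distance comparison of feature vectors described above can be sketched in a few lines. This is an illustrative toy (synthetic feature vectors, not the study's data): simulation runs whose features lie far from the experimental feature cloud score as outliers.

```python
import numpy as np

# Illustrative sketch: score candidate model runs by the Mahalanobis distance of
# their feature vectors from the cloud of experimentally observed feature vectors.
def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
exp_features = rng.normal(0.0, 1.0, size=(200, 3))   # features from repeated experiments
mean = exp_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(exp_features, rowvar=False))

good_run = mean + 0.1          # simulated features close to the experimental cloud
bad_run = mean + 8.0           # simulated features far from it
print(mahalanobis(good_run, mean, cov_inv) < mahalanobis(bad_run, mean, cov_inv))  # → True
```

    Ranking candidate parameter sets by this distance gives the quantifiable selection criterion the abstract refers to.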

  2. Modeling Course for Virtual University by Features

    Directory of Open Access Journals (Sweden)

    László Horváth

    2004-05-01

    Environments with large numbers of interrelated pieces of information use several advanced concepts, such as the computer description of different aspects of modeled objects in the form of feature-based models. In this case a set of features is defined and then used to modify an initial model into a final model that describes an instance of a well-defined complex object from a real-world environment. The authors have investigated the utilization of this approach and some relevant methods to establish course modeling in virtual university environments. The main objective is the definition of generic model entities for courses and instance model entities for student course profiles. Course model entities describe virtual university activities. The modeling can be applied generally, but it is being developed for the domain of higher education in virtual technologies. The paper introduces some virtual-university-related concepts and the authors' approach to the virtual university. Following this, the feature-driven associative model of a virtual course developed by the authors is explained. Some issues concerning the conceptualized, application-oriented virtual course features are discussed as a contribution to the implementation of a virtual classroom model proposed by the authors. Finally, possibilities for integrating the university model with engineering modeling systems are discussed, taking into account present-day virtual universities and the possibility of communicating with prospective students in both professional design and home computer environments.

  3. Spray-Formed Tooling with Micro-Scale Features

    Energy Technology Data Exchange (ETDEWEB)

    Kevin McHugh

    2010-06-01

    Molds, dies, and related tooling are used to shape many of the plastic and metal components we use every day at home and work. Traditional mold-making practices are labor and capital equipment intensive, involving multiple machining, benching and heat treatment operations. Spray forming is an alternative method to manufacture molds and dies. The general concept is to atomize and deposit droplets of a tooling alloy onto a pattern to form a thick deposit while imaging the pattern’s shape, surface texture and details. Unlike conventional machining, this approach can be used to fabricate tooling with micro-scale surface features. This paper describes a research effort to spray form molds and dies that are used to image micro-scale surface textures into polymers. The goal of the study is to replicate textures that give rise to superhydrophobic behavior by mimicking the surface structure of highly water repellent biological materials such as the lotus leaf. Spray conditions leading to high transfer fidelity of features into the surface of molded polymers will be described. Improvements in the water repellency of these materials were quantified by measuring the static contact angle of water droplets on flat and textured surfaces.

  4. Formation of coastline features by large-scale instabilities induced by high-angle waves.

    Science.gov (United States)

    Ashton, A; Murray, A B; Arnault, O

    2001-11-15

    Alongshore sediment transport driven by waves is generally assumed to smooth a coastline. This assumption is valid for small angles between the wave crest lines and the shore, as has been demonstrated in shoreline models. But when the angle between the waves and the shoreline is sufficiently large, small perturbations to a straight shoreline will grow. Here we use a numerical model to investigate the implications of this instability mechanism for large-scale morphology over long timescales. Our simulations show growth of coastline perturbations that interact with each other to produce large-scale features resembling various kinds of natural landforms, including the capes and cuspate forelands observed along the Carolina coast of southeastern North America. Wind and wave data from this area support our hypothesis that such an instability mechanism could be responsible for the formation of shoreline features at spatial scales up to hundreds of kilometres and temporal scales up to millennia.
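    The instability threshold can be illustrated with a toy flux law. This sketch assumes a simplified CERC-type alongshore flux Q ~ cos(φ)^(6/5) · sin(φ) (a stand-in, not the authors' full numerical model): where dQ/dφ is negative — at high wave angles — shoreline diffusivity turns negative and bumps grow instead of smoothing out.

```python
import numpy as np

# Toy sketch: alongshore sediment flux as a function of the deep-water wave angle
# phi (relative to the shore normal). Shoreline diffusivity ~ dQ/dphi, so beyond
# the flux maximum (dQ/dphi < 0) small shoreline perturbations amplify.
phi = np.linspace(0.01, np.pi / 2 - 0.01, 2000)
Q = np.cos(phi) ** 1.2 * np.sin(phi)          # assumed simplified flux law
dQ = np.gradient(Q, phi)

phi_max = phi[np.argmax(Q)]                   # angle of maximum flux
print(round(np.degrees(phi_max), 1))          # flux peaks near ~42 degrees
print(bool(dQ[-10] < 0))                      # high-angle regime: dQ/dphi < 0
```

    The crossover angle in the low-to-mid 40s (degrees) is what separates the familiar smoothing regime from the perturbation-growing regime discussed in the abstract.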

  5. Interactive Inconsistency Fixing in Feature Modeling

    Institute of Scientific and Technical Information of China (English)

    王波; 熊英飞; 胡振江; 赵海燕; 张伟; 梅宏

    2014-01-01

    Feature models have been widely adopted to reuse the requirements of a set of similar products in a domain. In the construction of feature models, one basic task is to ensure their consistency, which often involves detecting and fixing inconsistencies in the models. While many approaches have been proposed, most of them focus on detecting inconsistencies rather than fixing them. In this paper, we propose a novel dynamic-priority based approach to interactively fixing inconsistencies in feature models, and report an implementation of a system that not only automatically recommends a solution for fixing inconsistencies but also supports domain analysts in gradually reaching the desired solution by dynamically adjusting the priorities of constraints. The key technical contribution is, as far as we are aware, the first application of constraint hierarchy theory to feature modeling, where the degree of domain analysts' confidence in constraints is expressed using priority, and inconsistencies are resolved by deleting one or more lower-priority constraints. Two case studies demonstrate the usability and scalability (efficiency) of our new approach.
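    The priority-based resolution idea can be sketched in a few lines. This is a deliberately minimal toy (hypothetical constraint names, a stand-in consistency check — not the paper's system): when the constraint set is inconsistent, the lowest-priority constraints are dropped first.

```python
# Minimal sketch of constraint-hierarchy style fixing: resolve an inconsistent
# constraint set by deleting the lowest-priority constraints until consistent.
def fix_inconsistency(constraints, is_consistent):
    """constraints: list of (priority, name); higher priority = more trusted."""
    kept = sorted(constraints, key=lambda c: -c[0])
    while kept and not is_consistent([name for _, name in kept]):
        kept.pop()                      # drop the current lowest-priority constraint
    return [name for _, name in kept]

# Hypothetical example: 'A' and 'notA' conflict; the low-priority 'notA' is dropped.
def consistent(names):
    return not ('A' in names and 'notA' in names)

print(fix_inconsistency([(3, 'A'), (1, 'notA'), (2, 'B')], consistent))  # → ['A', 'B']
```

    In the interactive setting described above, the analyst would adjust the priorities between rounds rather than accept the first solution.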

  6. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Currently, with the rapid increase of data scales in network traffic classification, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, the execution time remained unsatisfactory because of the numerous iterative computations during processing. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed over the candidate subsets. The optimal feature subset is then selected using the continuous iterations of the Spark computing framework. The implementation demonstrates that, while preserving classification accuracy, our method reduces the time cost of modeling and classification and significantly improves the execution efficiency of feature selection.
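    The Fisher-score preprocessing step above can be sketched in plain NumPy (no Spark — an illustrative single-machine stand-in with synthetic data): features are ranked by between-class separation over within-class variance, and a forward search would then add features in score order.

```python
import numpy as np

# Illustrative Fisher score: for each feature, ratio of between-class scatter to
# within-class scatter; higher scores mark more discriminative features.
def fisher_score(X, y):
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 4))
X[:, 2] += 3.0 * y                       # feature 2 is strongly class-discriminative
scores = fisher_score(X, y)
print(int(np.argmax(scores)))            # → 2
```

    In the paper's pipeline this scoring and the subsequent sequential forward search are distributed across a Spark cluster; the scoring formula itself is unchanged.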

  7. Microstructural features of composite whey protein/polysaccharide gels characterized at different length scales

    NARCIS (Netherlands)

    Berg, van den L.; Rosenberg, Y.; Boekel, van M.A.J.S.; Rosenberg, M.; Velde, van de F.

    2009-01-01

    Mixed biopolymer gels are often used to model semi-solid food products. Understanding of their functional properties requires knowledge about structural elements composing these systems at various length scales. This study has been focused on investigating the structural features of mixed cold-set g

  9. Clafer: Unifying Class and Feature Modeling

    DEFF Research Database (Denmark)

    Bąk, Kacper; Diskin, Zinovy; Antkiewicz, Michal;

    2015-01-01

    of hierarchical models whereby properties can be arbitrarily nested in the presence of inheritance and feature modeling constructs. The semantics also enables building consistent automated reasoning support for the language: To date, we implemented three reasoners for Clafer based on Alloy, Z3 SMT, and Choco3 CSP...

  10. Scale Invariant Feature Transform Based Fingerprint Corepoint Detection

    Directory of Open Access Journals (Sweden)

    Madasu Hanmandlu

    2013-07-01

    The accurate and reliable detection of singular points (core and delta) is very important for the classification and matching of fingerprints. This paper presents a new approach for core point detection based on the scale invariant feature transform (SIFT). Firstly, SIFT points are extracted; then reliability and ridge frequency criteria are applied to reduce the candidate points required to make a decision on the core point. Finally a suitable mask is applied to detect an accurate core point. Experiments on the FVC2002 and FVC2004 databases show that our approach locates a unique reference point with high accuracy. Results of our approach are compared with those of existing methods in terms of the accuracy of core point detection. Defence Science Journal, 2013, 63(4), pp. 402-407. DOI: http://dx.doi.org/10.14429/dsj.63.2708

  11. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...... to a state-of-the-art method for cartilage segmentation using a one-stage nearest-neighbour classifier. Our method achieved better results than the state-of-the-art method for tibial as well as femoral cartilage segmentation. The next main contribution of the thesis deals with learning features autonomously...... image, respectively, and this system is referred to as the triplanar convolutional neural network in the thesis. We applied the triplanar CNN to segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for the cascaded classifier...

  12. Multi-scale Representation of Building Feature in Urban GIS

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper addresses multi-scale representation in urban GIS, presenting a model to dynamically generalize buildings on the basis of the Delaunay triangulation model. Considering the constraints of positional accuracy, statistical area balance and orthogonal characteristics in building-cluster generalization, the paper gives a progressive algorithm for building-cluster aggregation, including conflict detection (where), object displacement (who), and geometrical combination operations (how). The algorithm has been implemented in an interactive generalization system and some experimental illustrations are provided.
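    The "where" step — conflict detection — reduces to finding building pairs closer than the minimum separation legible at the target map scale. A minimal stdlib sketch with hypothetical centroids follows; the paper detects these conflicts via a Delaunay triangulation of the buildings, but brute-force distances suffice to illustrate the idea.

```python
import math

# Simplified conflict detection: flag building pairs whose centroid distance
# falls below a minimum map separation; flagged pairs become candidates for
# the displacement ("who") and aggregation ("how") steps.
def detect_conflicts(centroids, min_sep):
    conflicts = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            (x1, y1), (x2, y2) = centroids[i], centroids[j]
            if math.hypot(x2 - x1, y2 - y1) < min_sep:
                conflicts.append((i, j))
    return conflicts

buildings = [(0.0, 0.0), (0.4, 0.0), (5.0, 5.0)]   # hypothetical centroids (map units)
print(detect_conflicts(buildings, min_sep=1.0))    # → [(0, 1)]
```

    A Delaunay triangulation replaces the O(n²) loop with a proximity graph, which is what makes the progressive algorithm scale to whole building clusters.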

  13. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  14. REVERSE MODELING FOR CONIC BLENDING FEATURE

    Institute of Scientific and Technical Information of China (English)

    Fan Shuqian; Ke Yinglin

    2005-01-01

    A novel method to extract conic blending features in reverse engineering is presented. Unlike methods that recover constant- and variable-radius blends from unorganized points, it contains not only novel segmentation and feature recognition techniques, but also a bias-correction technique to capture a more reliable distribution of feature parameters along the spine curve. The segmentation, which depends on point classification, separates the points in the conic blend region from the input point cloud. The available feature parameters of the cross-sectional curves are extracted through the processes of slicing the point cloud with planes, conic curve fitting, and parameter estimation and compensation. The extracted parameters and their distribution laws are refined according to statistical theory, such as regression analysis and hypothesis testing. The proposed method can accurately capture the original design intentions and conveniently guide the reverse modeling process. Application examples are presented to verify the high precision and stability of the proposed method.
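    The per-slice fitting step can be illustrated with the simplest conic, a circle, fitted to one cross-section by algebraic least squares. This is an illustrative stand-in with synthetic points, not the paper's full conic-fitting pipeline:

```python
import numpy as np

# Algebraic least-squares circle fit: from (x-cx)^2 + (y-cy)^2 = r^2 we get the
# linear system  2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2.
def fit_circle(pts):
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# synthetic cross-section: points on a circle of radius 2 centered at (3, -1)
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([3 + 2 * np.cos(theta), -1 + 2 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))  # → 3.0 -1.0 2.0
```

    Repeating such a fit on every slicing plane yields the parameter samples along the spine curve whose distribution the paper then corrects for bias and refines statistically.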

  15. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale; and offers engineers and designers a new point of view, liberating creative and inno...

  16. Features and heterogeneities in growing network models

    Science.gov (United States)

    Ferretti, Luca; Cortelezzi, Michele; Yang, Bin; Marmorini, Giacomo; Bianconi, Ginestra

    2012-06-01

    Many complex networks, from the World Wide Web to biological networks, grow taking into account the heterogeneous features of the nodes. The feature of a node might be a discrete quantity, such as the classification of a URL document as a personal page, thematic website, news site, blog, search engine, or social network, or the classification of a gene in a functional module. Moreover, the feature of a node can be a continuous variable, such as the position of a node in the embedding space. In order to account for these properties, in this paper we provide a generalization of growing network models with preferential attachment that includes the effect of heterogeneous features of the nodes. The main effect of heterogeneity is the emergence of an “effective fitness” for each class of nodes, determining the rate at which nodes acquire new links. The degree distribution exhibits a multiscaling behavior analogous to the fitness model. This property is robust with respect to variations in the model, as long as links are assigned through effective preferential attachment. Beyond the degree distribution, in this paper we give a full characterization of the other relevant properties of the model. We evaluate the clustering coefficient and show that it disappears for large network size, a property shared with the Barabási-Albert model. Negative degree correlations are also present in this class of models, along with nontrivial mixing patterns among features. We therefore conclude that both small clustering coefficients and disassortative mixing are outcomes of the preferential attachment mechanism in general growing networks.
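    The "effective fitness" mechanism can be seen in a toy simulation. The setup below is an assumed minimal instance (two node classes with fitness 1 and 2; not the paper's general model): each new node attaches to an existing node with probability proportional to degree times class fitness.

```python
import random

# Toy fitness-weighted preferential attachment: nodes of the higher-fitness
# class should accumulate links faster on average.
def grow(n, seed=42):
    random.seed(seed)
    fitness = {0: 1.0, 1: 2.0}          # assumed class fitnesses
    deg, cls = [1, 1], [0, 1]           # two seed nodes, one of each class
    for _ in range(n - 2):
        weights = [deg[i] * fitness[cls[i]] for i in range(len(deg))]
        target = random.choices(range(len(deg)), weights=weights)[0]
        deg[target] += 1                # new node links to the chosen target
        cls.append(random.choice((0, 1)))
        deg.append(1)
    return deg, cls

deg, cls = grow(3000)
avg = lambda c: sum(d for d, k in zip(deg, cls) if k == c) / cls.count(c)
print(avg(1) > avg(0))                  # higher-fitness class has higher mean degree
```

    The class-dependent attachment rate is exactly what the abstract calls an effective fitness: degree alone no longer determines growth, and the degree distribution picks up the resulting multiscaling.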

  17. Model atmospheres - Tool for identifying interstellar features

    Science.gov (United States)

    Frisch, P. C.; Slojkowski, S. E.; Rodriguez-Bell, T.; York, D.

    1993-01-01

    Model atmosphere parameters are derived for 14 early A stars with rotation velocities, from optical spectra, in excess of 80 km/s. The models are compared with IUE observations of the stars in regions where interstellar lines are expected. In general, with the assumption of solar abundances, excellent fits are obtained in regions longward of 2580 A, and accurate interstellar equivalent widths can be derived using models to establish the continuum. The fits are poorer at shorter wavelengths, particularly at 2026-2062 A, where the stellar model parameters seem inadequate. Features indicating mass flows are evident in stars with known infrared excesses. In gamma TrA, variability in the Mg II lines is seen over the 5-year interval of these data, and also over timescales as short as 26 days. The present technique should be useful in systematic studies of episodic mass flows in A stars and for stellar abundance studies, as well as interstellar features.

  18. Some features of the small-scale solar wind fluctuations

    Science.gov (United States)

    Zastenker, G.; Eiges, P.; Avanov, L.; Astafyeva, N.; Zurbuchen, Th.; Bochsler, P.

    1995-06-01

    We have investigated small-scale variations of the solar wind ion flux measured with Faraday cups onboard the Prognoz-8 satellite. These measurements have a high time resolution of 1.24 seconds for intervals with a duration of several hours and as high as 0.02 seconds for some periods of about 1 hour duration. The main goal of this work is the determination of the quantitative features of fast ion flux fluctuations using mainly spectral analysis but also other methods. We also identify their association with interplanetary plasma parameters. Particularly, it is shown that the slope of the power spectra in the frequency range from 1E-4 to 6E-2 Hz is close to the classical Kolmogorov (-5/3) law. We also discuss some intervals with a very high level of the relative amplitude of flux fluctuations (10-20 percent) which were observed near the Earth's bow shock in the foreshock region. The use of the wavelet method for the long time series allows us to study the temporal evolution of power spectra.
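    The slope measurement described above amounts to a least-squares fit in log-log space. The snippet below uses a synthetic noisy spectrum, not the Prognoz-8 data, to show how a Kolmogorov-like slope would be estimated:

```python
import numpy as np

# Illustrative slope estimate: build a noisy power-law spectrum with index -5/3
# over a frequency range similar to the paper's, then recover the log-log slope.
rng = np.random.default_rng(0)
f = np.logspace(-4, -1.2, 200)                     # Hz (illustrative range)
psd = f ** (-5.0 / 3.0) * np.exp(rng.normal(0.0, 0.1, f.size))

slope, intercept = np.polyfit(np.log10(f), np.log10(psd), 1)
print(round(slope, 2))                             # close to -5/3 ≈ -1.67
```

    On real flux data one would first estimate the spectrum (e.g. by segment averaging) before fitting; the wavelet analysis mentioned in the abstract then tracks how this spectrum evolves in time.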

  19. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter;

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  20. Features and heterogeneities in growing network models

    CERN Document Server

    Ferretti, Luca; Yang, Bin; Marmorini, Giacomo; Bianconi, Ginestra

    2011-01-01

    Many complex networks from the World Wide Web to biological networks are growing taking into account the heterogeneous features of the nodes. The feature of a node might be a discrete quantity such as the classification of a URL document as a personal page, thematic website, news, blog, search engine, social network, etc., or the classification of a gene in a functional module. Moreover the feature of a node can be a continuous variable such as the position of a node in the embedding space. In order to account for these properties, in this paper we provide a generalization of growing network models with preferential attachment that includes the effect of heterogeneous features of the nodes. The main effect of heterogeneity is the emergence of an "effective fitness" for each class of nodes, determining the rate at which nodes acquire new links. Beyond the degree distribution, in this paper we give a full characterization of the other relevant properties of the model. We evaluate the clustering coefficient and show ...

  1. Robot Vision System for Coordinate Measurement of Feature Points on Large Scale Automobile Part

    Institute of Scientific and Technical Information of China (English)

    Pongsak Joompolpong; Pradit Mittrapiyanuruk; Pakorn Keawtrakulpong

    2016-01-01

    In this paper, we present a robot vision based system for coordinate measurement of feature points on large scale automobile parts. Our system consists of an industrial 6-DOF robot mounted with a CCD camera and a PC. The system controls the robot into the area of feature points. The images of the measured feature points are acquired by the camera mounted on the robot. 3D positions of the feature points are obtained from a model based pose estimation applied to the images. The measured positions of all feature points are then transformed to the reference coordinate frame of the feature points, whose positions are obtained from a coordinate measuring machine (CMM). Finally, the point-to-point distances between the measured feature points and the reference feature points are calculated and reported. The results show that the root mean square error (RMSE) of the values measured by our system is less than 0.5 mm. Our system is adequate for automobile assembly and can perform faster than conventional methods.
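    The evaluation step — transforming measured points into the CMM reference frame and computing point-to-point error — can be sketched with a standard rigid alignment. This assumes a Kabsch/Procrustes alignment on hypothetical points, not the authors' calibration procedure:

```python
import numpy as np

# Rigidly align measured points to reference (CMM) points, then report the
# point-to-point RMSE, as in the paper's accuracy evaluation.
def align_rmse(measured, reference):
    mc, rc = measured.mean(axis=0), reference.mean(axis=0)
    H = (measured - mc).T @ (reference - rc)         # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    aligned = (measured - mc) @ R.T + rc
    return float(np.sqrt(((aligned - reference) ** 2).sum(axis=1).mean()))

# hypothetical feature points (mm): the reference geometry, rotated and shifted
ref = np.array([[0., 0., 0.], [100., 0., 0.], [0., 50., 0.], [0., 0., 25.]])
th = np.radians(10.0)
R = np.array([[np.cos(th), -np.sin(th), 0.], [np.sin(th), np.cos(th), 0.], [0., 0., 1.]])
meas = ref @ R.T + np.array([5.0, -3.0, 2.0])        # noise-free measurement
print(align_rmse(meas, ref) < 0.5)                   # → True (aligns to ~machine precision)
```

    With real measurements the residual RMSE after alignment is exactly the sub-0.5 mm figure the abstract reports.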

  2. Aeroheating model advancements featuring electroless metallic plating

    Science.gov (United States)

    Stalmach, C. J., Jr.; Goodrich, W. D.

    1976-01-01

    Discussed are advancements in wind tunnel model construction methods and the hypersonic test data demonstrating them. The general objective was to develop model fabrication methods with improved heat transfer measuring capability at lower model cost. A plated-slab model approach was evaluated with cast models containing constantan wires that formed single-wire-to-plate surface thermocouple junctions with a seamless skin of electroless nickel alloy. The surface of a space shuttle orbiter model was selectively plated with scaled tiles to simulate, with high fidelity, the probable misalignments of the heatshield tiles on a flight vehicle. Initial Mach 8 heating results indicated a minor effect of tile-misalignment roughness on boundary layer transition, implying a possible relaxation of heatshield manufacturing tolerances. Some loss of the plated tiles was experienced when the model was tested at high heating rates.

  3. Multi-Scale Analysis Based Curve Feature Extraction in Reverse Engineering

    Institute of Scientific and Technical Information of China (English)

    YANG Hongjuan; ZHOU Yiqi; CHEN Chengjun; ZHAO Zhengxu

    2006-01-01

    A sectional curve feature extraction algorithm based on multi-scale analysis is proposed for reverse engineering. The algorithm consists of two parts: feature segmentation and feature classification. In the first part, curvature scale space is applied to multi-scale analysis and initial feature detection. To obtain the primary and secondary curve primitives, feature fusion is realized by transmitting multi-scale feature detection information across scales. In the second part, a projection height function based on the area of a quadrilateral is presented, which improves the criteria for sectional curve feature classification. Results for synthetic curves and practical scanned sectional curves are given to illustrate the efficiency of the proposed algorithm for feature extraction. The consistency between feature extraction based on multi-scale curvature analysis and the curve primitives is verified.
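    The curvature-scale-space idea can be demonstrated on a synthetic curve. This sketch (assumed Gaussian scales, not the paper's exact implementation) smooths a rippled circle at a coarse scale: the fine ripple's curvature oscillations vanish while the global shape survives, which is how scale separates detail features from global ones.

```python
import numpy as np

# Curvature of a parametric curve (x(t), y(t)) via finite differences.
def curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# Gaussian smoothing of a periodic signal in the Fourier domain
# (Gaussian of std sigma samples has transfer function exp(-2*pi^2*sigma^2*f^2)).
def gaussian_smooth_periodic(v, sigma):
    freqs = np.fft.fftfreq(v.size)              # cycles per sample
    return np.real(np.fft.ifft(np.fft.fft(v) * np.exp(-2.0 * (np.pi * freqs * sigma) ** 2)))

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1.0 + 0.05 * np.sin(30 * t)                 # circle with a fine-scale ripple
x, y = r * np.cos(t), r * np.sin(t)
xs = gaussian_smooth_periodic(x, 5.0)           # coarse-scale version of the curve
ys = gaussian_smooth_periodic(y, 5.0)
# coarse scale: ripple curvature oscillations are strongly suppressed
print(bool(np.std(curvature(xs, ys)) < np.std(curvature(x, y))))  # → True
```

    Tracking where curvature extrema appear and disappear as sigma grows is exactly the multi-scale detection information that the algorithm's feature-fusion step transmits between scales.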

  4. Feature Matching in Time Series Modelling

    CERN Document Server

    Xia, Yingcun

    2011-01-01

    Using a time series model to mimic an observed time series has a long history. However, with regard to this objective, conventional estimation methods for discrete-time dynamical models are frequently found to be wanting. In the absence of a true model, we prefer an alternative approach to conventional model fitting that typically involves one-step-ahead prediction errors. Our primary aim is to match the joint probability distribution of the observable time series, including long-term features of the dynamics that underpin the data, such as cycles, long memory and others, rather than short-term prediction. For want of a better name, we call this specific aim "feature matching". The challenges of model mis-specification, measurement errors and the scarcity of data are forever present in real time series modelling. In this paper, by synthesizing earlier attempts into an extended likelihood, we develop a systematic approach to empirical time series analysis to address these challenges and to aim at achieving...
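    A toy contrast with prediction-error fitting: instead of minimizing one-step errors, pick the model parameter so that a distributional feature of the model matches the data. The AR(1) example below is an assumed illustration, not the paper's estimator; for AR(1), the lag-1 autocorrelation equals the coefficient, so matching that feature recovers it directly.

```python
import numpy as np

# Feature to match: the lag-1 autocorrelation of the observed series.
def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Simulate an AR(1) series x[t] = phi * x[t-1] + noise with known phi.
rng = np.random.default_rng(3)
phi_true = 0.7
x = np.zeros(4000)
for t in range(1, x.size):
    x[t] = phi_true * x[t - 1] + rng.normal()

phi_hat = lag1_autocorr(x)     # matching the feature recovers the parameter
print(abs(phi_hat - phi_true) < 0.05)  # → True
```

    For richer models, the matched features would be cycles, long-memory measures, or other functionals of the joint distribution, as the abstract describes.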

  5. Hierarchical Geometric Constraint Model for Parametric Feature Based Modeling

    Institute of Scientific and Technical Information of China (English)

    高曙明; 彭群生

    1997-01-01

    A new geometric constraint model is described, which is hierarchical and suitable for parametric feature-based modeling. In this model, different levels of geometric information are represented to support various stages of a design process. An efficient approach to parametric feature-based modeling is also presented, adopting the high-level geometric constraint model. A low-level geometric model such as B-reps can be derived automatically from the high-level geometric constraint model, enabling designers to perform their task of detailed design.

  6. Spatial structure and scale feature of the atmospheric pollution source impact of city agglomeration

    Institute of Scientific and Technical Information of China (English)

    XU; Xiangde; ZHOU; Xiuji; SHI; Xiaohui

    2005-01-01

    The spatial structure and multi-scale features of the atmospheric pollution influence domain of Beijing and its peripheral areas (a rapidly developing city agglomeration) are dissected and analyzed in this paper on the basis of the atmospheric pollution dynamic-chemical process observations of the urban building-ensemble boundary layer from the Beijing City Air Pollution Observation Experiment (BECAPEX) in winter (February) and summer (August) 2003, together with relevant meteorological elements, satellite-retrieved aerosol optical depth (AOD) and other comprehensive data, using a dynamic-statistical integrated analysis of "point-surface" spatial structure. Results show significant differences in the contributions of different winter/summer pollution emission sources to the composition of atmospheric pollution, and the principal component analysis (PCA) results of the statistical model also indicate that SO2 and NOX dominated the composition of winter aerosol particles, whereas CO and NOX dominated in summer. Surface-layer atmospheric dynamic and thermal structures and various pollutant species at the upper boundary of building ensembles at different urban observational sites of Beijing in winter and summer showed "in-phase" variation and the spatial-scale feature of an "influence domain". The power spectrum analysis (PSA) shows that the period spectra of winter/summer particle concentrations accorded with those of the atmospheric wind field: longer periods were dominant in winter but shorter periods in summer, revealing the impact of the seasonal-scale features of the winter/summer atmospheric general circulation on the periods of atmospheric pollution variations. Analysis of urban thermal heterogeneity shows that the multiscale effect of the Beijing urban heat island (UHI) was associated with the heterogeneous expansion of the tall-building area. 
In urban atmospheric dynamical and thermal characteristic spatial structures, the
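
    The power spectrum analysis (PSA) step used above to extract dominant periods can be illustrated with a simple periodogram. The sketch below is generic NumPy code on synthetic data (not the BECAPEX observations); it returns the period of the strongest spectral peak.

```python
import numpy as np

def dominant_period(series, dt=1.0):
    """Return the period of the strongest spectral peak (ignoring the mean)."""
    series = np.asarray(series, dtype=float)
    series = series - series.mean()            # remove the zero-frequency component
    power = np.abs(np.fft.rfft(series)) ** 2   # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(series), d=dt)
    peak = np.argmax(power[1:]) + 1            # skip f = 0
    return 1.0 / freqs[peak]
```

    Applied to a concentration series sampled at interval dt, the returned period would correspond to the dominant cycle that the abstract compares between winter and summer.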

  7. Functional Scaling of Musculoskeletal Models

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark;

    The validity of the predictions from musculoskeletal models depends largely on how well the morphology of the model matches that of the patient. To address this problem, we present a novel method to scale a cadaver-based musculoskeletal model to match both the segment lengths and joint parameters...... orientations are then used to morph/scale a cadaver based musculoskeletal model using a set of radial basis functions (RBFs). Using the functional joint axes to scale musculoskeletal models provides a better fit to the marker data, and allows for representation of patients with considerable difference in bone...... geometry, without the need for MR/CT scans. However, more validation activities are needed to better understand the effect of morphing musculoskeletal models based on functional joint parameters....

  8. Recognizing objects in 3D point clouds with multi-scale local features.

    Science.gov (United States)

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-12-15

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the offline training phase, each model is represented with a set of multi-scale local surface features. During the online recognition phase, a set of keypoints is first detected in each scene. The local surfaces around these keypoints are then encoded with multi-scale feature descriptors. These scene features are matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to state-of-the-art algorithms. Experimental results show that our algorithm is fully automatic, highly effective, and very robust to occlusion and clutter. It achieved the best recognition performance on both datasets, showing its superiority over existing algorithms.
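
    The descriptor-matching step that generates recognition hypotheses can be sketched as a nearest-neighbor search with a distance-ratio test. This is a common matching criterion, not necessarily the authors' exact rule; the 0.8 threshold is an assumption.

```python
import numpy as np

def match_descriptors(scene_feats, model_feats, ratio=0.8):
    """Match scene descriptors to model descriptors with a ratio test.

    A scene feature is matched to its nearest model feature only when that
    neighbor is sufficiently closer than the second nearest, which suppresses
    ambiguous matches in clutter. Returns (scene_index, model_index) pairs.
    """
    matches = []
    for i, f in enumerate(scene_feats):
        d = np.linalg.norm(model_feats - f, axis=1)  # distances to all model features
        order = np.argsort(d)
        best, second = order[0], order[1]
        if d[best] < ratio * d[second]:
            matches.append((i, int(best)))
    return matches
```

    Each surviving pair would then vote for a model identity and contribute to a pose hypothesis, which the verification stage accepts or rejects.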

  9. Enhanced HMAX model with feedforward feature learning for multiclass categorization

    Directory of Open Access Journals (Sweden)

    Yinlin eLi

    2015-10-01

    In recent years, interdisciplinary research between neuroscience and computer vision has promoted development in both fields. Many biologically inspired visual models have been proposed; among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of the V1 to posterior inferotemporal (PIT) layers of the primate visual cortex, which can generate a series of position- and scale-invariant features. However, it can be improved with attention modulation and memory processing, two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we mimic the first 100-150 milliseconds of visual cognition to enhance the HMAX model, focusing mainly on the unsupervised feedforward feature-learning process. The main modifications are as follows: (1) to mimic the attention modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which supports the initial feature extraction for memory processing; (2) to mimic the learning, clustering, and short-term to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters from multiscale mid-level patches, which are taken as long-term memory; (3) inspired by the multiple-feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position is encoded progressively in different layers of the HMAX model. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted; the results on Caltech101 show that the enhanced model, with a smaller memory size, exhibits higher accuracy than the original HMAX model and also achieves better accuracy than other unsupervised feature-learning methods in the multiclass categorization task.
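
    The max-pooling that gives HMAX its position and scale invariance can be sketched as a maximum over neighbouring scales followed by a local spatial maximum (the C-layer operation). The NumPy sketch below is illustrative only; the map sizes and pooling width are assumptions, not the paper's settings.

```python
import numpy as np

def c1_pool(s1_maps, pool=2):
    """HMAX-style C-layer: max across adjacent scales, then local spatial max.

    s1_maps: list of equally sized 2-D filter-response maps at neighbouring
    scales. Pooling over scales gives scale tolerance; pooling over
    non-overlapping pool x pool blocks gives position tolerance.
    """
    scale_max = np.maximum.reduce(s1_maps)          # elementwise max across scales
    h, w = scale_max.shape
    h2, w2 = h // pool, w // pool
    blocks = scale_max[:h2 * pool, :w2 * pool].reshape(h2, pool, w2, pool)
    return blocks.max(axis=(1, 3))                  # max within each spatial block
```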

  10. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstructions was tested in the key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created which shows paleoreconstructions of the original (indigenous) living environment of initial settlers during main time periods of the Holocene age and features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales and aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analysis, analysis of complex landscape and archaeological studies. 2. Mapping of factual material and analyzing of the spatial distribution of archaeological sites were performed. 3. Running of a large-scale field landscape mapping (sample areas) and compiling of maps of the modern landscape structure. On this basis, edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of lake-river system during the main periods of the Holocene. The boundaries of restored paleolakes were determined based on power and territorial confinement of decay ooze. 5. On the basis of landscape and edaphic method the actual paleolandscape reconstructions for the main periods of the Holocene were performed. During the reconstructions of the original, indigenous flora we relied on data of palynological studies conducted on the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. 
The reconstruction of the dynamics of the

  11. Dynamic features analysis for the large-scale logistics system warehouse-out operation

    Science.gov (United States)

    Yao, Can-Zhong; Lin, Ji-Nan; Liu, Xiao-Feng; Zheng, Xu-Zhou

    2014-12-01

    In the paper, we systematically study the behavior dynamics of warehouse-out operations in a large-scale logistics system. First, we discover that steel-product warehouse-out events in different warehouses of a large-scale logistics system are characterized by bursts, and that the warehouse-out inter-event time follows a power-law distribution with exponent close to α=2.5, which differs from the two classical models proposed by Barabasi (2005) and Vazquez (2005), respectively. By analyzing the warehouse-out inter-event time distribution of the products in one large-scale logistics system, we further discuss the burst features and mechanisms of the logistics system. Additionally, we find that in population behaviors, the burst features can be explained by priorities rooted in holidays and interior task scheduling, whereas the warehouse-out behaviors of active individuals do not show any burst features. Further, by means of R/S analysis we find that the warehouse-out quantity of steel products follows fractional Brownian motion with a Hurst exponent above 0.5, which implies that the quantity of products in a logistics system is not only guided by present market prices but is also closely related to the previous warehouse-out quantity. Based on the V statistic, we compare the memory lengths of different products in warehouses. Finally, we apply complex-network visibility graphs for further validation of the fractal features of the logistics system and find that almost every visibility graph exhibits small-world and scale-free features. Both R/S analysis and the visibility graphs reinforce that the warehouse-out quantity of products in a logistics system is not a random-walk process, but contains intrinsic regularities and long-term correlation between present and previous warehouse-out quantities.
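
    The rescaled-range (R/S) estimate of the Hurst exponent used above can be sketched as follows: within windows of increasing size, divide the range of the cumulative deviations by the standard deviation, then regress the average R/S against window size on log-log axes. A generic implementation under the textbook definition, not the authors' code:

```python
import numpy as np

def hurst_rs(series, window_sizes):
    """Estimate the Hurst exponent by the rescaled-range (R/S) method."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            z = np.cumsum(w - w.mean())        # cumulative deviation profile
            r = z.max() - z.min()              # range of the profile
            s = w.std()                        # standard deviation of the window
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]     # slope ~ Hurst exponent
```

    A slope near 0.5 indicates an uncorrelated series, while a slope above 0.5, as reported for the warehouse-out quantities, indicates long-term persistence.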

  12. Scaling model for symmetric star polymers

    Science.gov (United States)

    Ramachandran, Ram; Rai, Durgesh K.; Beaucage, Gregory

    2010-03-01

    Neutron scattering data from symmetric star polymers with six poly(urethane-ether) arms, chemically bonded to a C-60 molecule, are fitted using a new scaling model and scattering function. The new scaling function can describe both good-solvent and theta-solvent conditions as well as resolve deviations in chain conformation due to steric interactions between star arms. The scaling model quantifies the distinction between invariant topological features of this star polymer and chain tortuosity, which changes with goodness of solvent and steric interaction. Beaucage G, Phys. Rev. E 70, 031401 (2004); Ramachandran R, et al., Macromolecules 41, 9802-9806 (2008); Ramachandran R, et al., Macromolecules 42, 4746-4750 (2009); Rai DK, et al., Europhys. Lett. (submitted 10/2009).

  13. New approach to spectral features modeling

    NARCIS (Netherlands)

    Brug, H. van; Scalia, P.S.

    2012-01-01

    The origin of spectral features, speckle effects, is explained, followed by a discussion of many aspects of spectral feature generation. The next part gives an overview of means to limit the amplitude of the spectral features. This paper discusses all means to reduce the spectral features.

  14. On multi-scale representations of geographic features

    Institute of Scientific and Technical Information of China (English)

    WANG Yanhui; LI Xiaojuan; GONG Huili

    2006-01-01

    This paper contains a review of the development of research on multiple representations compiled from Geographic Information Systems (GIS), including data structure, formalization and storage, and intelligent zoom. A summary is also included of the problems of interconnectivity, consistency maintenance, dynamic query and coexisting updates, as well as a research review of multi-scale databases and related studies. Finally, research directions and foci are proposed for the future design and implementation of multi-scale GIS.

  15. Building high-level features using large scale unsupervised learning

    CERN Document Server

    Le, Quoc V; Devin, Matthieu; Corrado, Greg; Chen, Kai; Ranzato, Marc'Aurelio; Dean, Jeff; Ng, Andrew Y

    2011-01-01

    We consider the problem of building detectors for high-level concepts using only unsupervised feature learning. For example, we would like to understand if it is possible to learn a face detector using only unlabeled images downloaded from the internet. To answer this question, we trained a simple feature learning algorithm on a large dataset of images (10 million images, each image is 200x200). The simulation is performed on a cluster of 1000 machines with fast network hardware for one week. Extensive experimental results reveal surprising evidence that such high-level concepts can indeed be learned using only unlabeled data and a simple learning algorithm.

  16. Mosaic of the Curved Human Retinal Images Based on the Scale-Invariant Feature Transform

    Institute of Scientific and Technical Information of China (English)

    LI Ju-peng; CHEN Hou-jin; ZHANG Xin-yuan; YAO Chang

    2008-01-01

    To meet the needs of fundus examination, including outlook widening and pathology tracking, this paper describes a robust feature-based method for fully automatic mosaicing of curved human retinal images photographed by a fundus microscope. The kernel of this new algorithm is the scale-, rotation- and illumination-invariant interest point detector and feature descriptor: the Scale-Invariant Feature Transform. After matching interest points according to a second-nearest-neighbor strategy, the parameters of the model are estimated using the correct matches, extracted from the putative sets by a new inlier identification scheme based on the Sampson distance. In order to preserve image features, bilinear warping and multi-band blending techniques are used to create panoramic retinal images. Experiments show that the proposed method works well, with registration error within 0.3 pixels, even for retinal images without discernible vascular structure, in contrast to state-of-the-art algorithms.
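
    The model-estimation step, fitting a global transform from putative interest-point matches, can be illustrated with a least-squares affine fit. This is a simplified stand-in for the paper's model (a real mosaic pipeline would combine it with the inlier-selection step); the function names are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of matched coordinates, N >= 3.
    Returns a 2x3 matrix A such that dst ~ A @ [x, y, 1].
    """
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                  # homogeneous source coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None) # solve X @ A = dst in least squares
    return A.T                                  # (2, 3)

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ A[:, :2].T + A[:, 2]
```

    With clean matches the fit is exact; with outliers, the residuals of this fit are what an inlier-identification scheme such as the Sampson-distance test would threshold.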

  17. Fast multi-scale feature fusion for ECG heartbeat classification

    Science.gov (United States)

    Ai, Danni; Yang, Jian; Wang, Zeyu; Fan, Jingfan; Ai, Changbin; Wang, Yongtian

    2015-12-01

    Electrocardiography (ECG) monitors the electrical activity of the heart through signals of small amplitude and duration; as a result, hidden information in ECG data is difficult to determine, yet this concealed information can be used to detect abnormalities. In our study, a fast feature-fusion method for ECG heartbeat classification based on multi-linear subspace learning is proposed. The method consists of four stages. First, baseline and high frequencies are removed to segment heartbeats. Second, wavelet-packet decomposition, an extension of wavelets, is conducted to extract features; wavelet-packet decomposition provides good time and frequency resolution simultaneously. Third, the decomposed coefficients are arranged as a two-way tensor, in which feature fusion is directly implemented with generalized N-dimensional ICA (GND-ICA). In this method, the co-relationships among different data are considered, the disadvantages of high dimensionality are avoided, and computation is reduced compared with linear subspace-learning methods such as PCA. Finally, a support vector machine (SVM) is used as the classifier for heartbeat classification. In this study, ECG records are obtained from the MIT-BIH arrhythmia database, and four main heartbeat classes are used to examine the proposed algorithm. Based on five measures (sensitivity, positive predictivity, accuracy, average accuracy, and a t-test), we conclude that a GND-ICA-based strategy provides enhanced ECG heartbeat classification; furthermore, large redundant features are eliminated and classification time is reduced.

  18. Multi-scale contrast enhancement of oriented features in 2D images using directional morphology

    Science.gov (United States)

    Das, Debashis; Mukhopadhyay, Susanta; Praveen, S. R. Sai

    2017-01-01

    This paper presents a multi-scale contrast enhancement scheme for improving the visual quality of directional features present in 2D gray-scale images. Directional morphological filters are employed to locate and extract scale-specific image features with different orientations, which are subsequently stored in a set of feature images. The final enhanced image is constructed by a weighted combination of these feature images with the original image. In this combination, the feature images corresponding to progressively smaller scales are given a higher proportion of contribution through progressively larger weights. The proposed method has been formulated, implemented and executed on a set of real 2D gray-scale images with oriented features. The experimental results visually establish the efficacy of the method. The proposed method has been compared with other similar methods on both subjective and objective bases, and its overall performance is found to be satisfactory.
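
    The directional morphological filtering can be sketched with flat line structuring elements: an opening along an orientation removes bright features narrower than the element, so the top-hat (image minus opening) isolates them. Below is a pure-NumPy sketch for axis-aligned orientations; the weights, following the paper's idea that smaller scales receive larger weights, are illustrative values, not the authors'.

```python
import numpy as np

def line_erode(img, length, axis):
    """Grayscale erosion with a 1-D line structuring element along an axis."""
    out = img.copy()
    for shift in range(-(length // 2), length // 2 + 1):
        out = np.minimum(out, np.roll(img, shift, axis=axis))
    return out

def line_dilate(img, length, axis):
    """Grayscale dilation with a 1-D line structuring element along an axis."""
    out = img.copy()
    for shift in range(-(length // 2), length // 2 + 1):
        out = np.maximum(out, np.roll(img, shift, axis=axis))
    return out

def directional_tophat(img, length, axis):
    """White top-hat: bright features narrower than the line element."""
    opened = line_dilate(line_erode(img, length, axis), length, axis)
    return img - opened

def enhance(img, lengths=(3, 7), weights=(0.6, 0.3)):
    """Weighted combination of multi-scale directional top-hats with the original."""
    out = img.astype(float)
    for length, w in zip(lengths, weights):
        for axis in (0, 1):
            out = out + w * directional_tophat(img.astype(float), length, axis)
    return out
```

    A thin vertical line survives opening along its own direction (top-hat near zero) but is removed by the horizontal element, so only the matching orientation contributes to the enhancement.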

  19. Landmine detection using discrete hidden Markov models with Gabor features

    Science.gov (United States)

    Frigui, Hichem; Missaoui, Oualid; Gader, Paul

    2007-04-01

    We propose a general method for detecting landmine signatures in vehicle-mounted ground penetrating radar (GPR) using discrete hidden Markov models and Gabor wavelet features. Observation vectors are constructed based on the expansion of the signature's B-scan using a bank of scale- and orientation-selective Gabor filters. This expansion provides a localized frequency description that is encoded in the observation sequence. These observations do not impose an explicit structure on the mine model, and are used to naturally model the time-varying signatures produced by the interaction of the GPR and the landmines as the vehicle moves. The proposed method is evaluated on real data collected by a GPR mounted on a moving vehicle at three different geographical locations that include several lanes. The model parameters are optimized using the Baum-Welch algorithm, and lane-based cross-validation, in which each mine lane is in turn treated as a test set with the rest of the lanes used for training, is used to train and test the model. Preliminary results show that observations encoded with Gabor wavelet features perform better than observations encoded with gradient-based edge features.
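
    The evaluation step of such a discrete HMM, scoring an observation sequence against a trained mine or background model, can be sketched with the scaled forward algorithm. This is generic code; the Gabor-feature encoding and the Baum-Welch re-estimation are not shown.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    pi: (S,) initial state probabilities, A: (S, S) transition matrix,
    B: (S, M) emission probabilities, obs: sequence of symbol indices.
    Uses per-step normalization for numerical stability on long sequences.
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik
```

    In a detector of this kind, the sequence would typically be scored under both a mine model and a background model, with the log-likelihood ratio thresholded to declare a detection.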

  20. An Active Model for Facial Feature Tracking

    Directory of Open Access Journals (Sweden)

    Jörgen Ahlberg

    2002-06-01

    We present a system for finding and tracking a face and extracting global and local animation parameters from a video sequence. The system uses an initial colour-processing step to find a rough estimate of the position, size, and in-plane rotation of the face, followed by a refinement step driven by an active model. The latter step refines the previous estimate, and also extracts local animation parameters. The system is able to track the face and some facial features in near real time, and can compress the result to a bitstream compliant with MPEG-4 face and body animation.

  1. Fishermen Follow Fine-scaled Physical Ocean Features For Finance

    Science.gov (United States)

    Fuller, E.; Watson, J. R.; Samhouri, J.; Castruccio, F. S.

    2016-12-01

    The seascapes on which many millions of people make their living and secure food have complex and dynamic spatial features - the figurative hills and valleys - that control where and how people work at sea. Here, we quantify the physical mosaic of the surface ocean by identifying Lagrangian Coherent Structures for a whole seascape - the California Current - and assess their impact on the spatial distribution of fishing. We show that there is a mixed response: some fisheries track these physical features, and others avoid them. This spatial behavior maps to economic impacts: we find that tuna fishermen can expect to make three times more revenue per trip if fishing occurs on strong coherent structures. These results highlight a connection between the physical state of the oceans, the spatial patterns of human activity and ultimately the economic prosperity of coastal communities.

  2. Innovations in individual feature history management - The significance of feature-based temporal model

    Science.gov (United States)

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationships among the ISO's temporal primitives of a feature in order to keep track of feature history. The explicit temporal relationships can enhance query performance on feature history by removing topological comparison during the query process. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land-parcel history in Athens, Georgia. The result of temporal queries on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.
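
    The explicit predecessor links that let history queries avoid topological comparison can be sketched with a minimal data structure. All field names here are hypothetical illustrations; the ISO 19108 temporal schema the paper builds on is considerably richer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureVersion:
    """One state of a feature, with independent spatial and thematic parts."""
    feature_id: str
    geometry: object              # spatial state, e.g. a parcel polygon id
    theme: dict                   # thematic attributes at this version
    valid_from: int               # start of the valid-time interval
    valid_to: Optional[int] = None
    predecessor: Optional["FeatureVersion"] = None  # explicit temporal link

def history(version):
    """Walk the stored predecessor links instead of comparing topology."""
    chain = []
    while version is not None:
        chain.append(version)
        version = version.predecessor
    return list(reversed(chain))   # oldest version first
```

    Because the relationship is stored explicitly, a history query is a pointer walk rather than a comparison of temporal intervals at query time, which is the performance gain the abstract reports.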

  3. Scale-Scale Correlation as Discriminant Among the Biased Galaxy Formation Models

    Institute of Scientific and Technical Information of China (English)

    FENG Long-Long; XIANG Shou-Ping

    2001-01-01

    Using mock galaxy catalogues created from N-body simulations, various biasing prescriptions for modelling the relative distribution between galaxies and the underlying dark matter are statistically tested using scale-scale correlation. We found that the scale-scale correlation is capable of breaking the model degeneracy indicated by low-order clustering statistics, and could be taken as an effective discriminant among a variety of biasing models. In particular, comparing with the APM bright galaxy catalogue, we infer that the two-parameter Lagrangian biasing model gives the best fit to the observed clustering features.

  4. Scaling Features of Multimode Motions in Coupled Chaotic Oscillators

    DEFF Research Database (Denmark)

    Pavlov, A.N.; Sosnovtseva, Olga; Mosekilde, Erik

    2003-01-01

    Two different methods (the WTMM and DFA approaches) are applied to investigate the scaling properties of the return-time sequences generated by a system of two coupled chaotic oscillators. Transitions from two-mode asynchronous dynamics (torus or torus-chaos) to different states of chaotic phase synchronization are found to significantly reduce the degree of multiscality. The influence of external noise on the possibility of distinguishing the various chaotic states is considered.
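
    The DFA approach mentioned above can be sketched as follows: integrate the series, detrend it piecewise within windows of increasing size, and regress the residual fluctuation against window size on log-log axes. This is a generic order-1 DFA implementation, not the authors' code.

```python
import numpy as np

def dfa_exponent(series, window_sizes):
    """Detrended fluctuation analysis scaling exponent (order-1 detrending)."""
    profile = np.cumsum(series - np.mean(series))   # integrated signal
    log_n, log_f = [], []
    for n in window_sizes:
        rms = []
        for i in range(len(profile) // n):
            seg = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)            # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        log_n.append(np.log(n))
        log_f.append(np.log(np.mean(rms)))
    return np.polyfit(log_n, log_f, 1)[0]           # slope = scaling exponent
```

    An exponent near 0.5 indicates an uncorrelated sequence; a scale-dependent exponent is the multiscality whose reduction the abstract reports at the transition to phase synchronization.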

  5. A Novel DBN Feature Fusion Model for Cross-Corpus Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Zou Cairong

    2016-01-01

    The fusion of features from separate sources is a current technical difficulty in cross-corpus speech emotion recognition. The purpose of this paper is, based on Deep Belief Nets (DBN) from deep learning, to use the emotional information hidden in the speech spectrum diagram (spectrogram) as image features and then implement feature fusion with traditional emotion features. First, based on spectrogram analysis with the STB/Itti model, new spectrogram features are extracted from the color, brightness, and orientation channels, respectively; then two alternative DBN models fuse the traditional and spectrogram features, which increases the scale of the feature subset and its ability to characterize emotion. In experiments on the ABC database and Chinese corpora, the new feature subset improves cross-corpus recognition by a distinct 8.8% compared with traditional speech emotion features. The proposed method provides a new idea for feature fusion in emotion recognition.

  6. On the scaling features of magnetic field fluctuations at non-MHD scales in turbulent space plasmas

    Science.gov (United States)

    Consolini, G.; Giannattasio, F.; Yordanova, E.; Vörös, Z.; Marcucci, M. F.; Echim, M.; Chang, T.

    2016-11-01

    In several different contexts space plasmas display intermittent turbulence at magneto-hydro-dynamic (MHD) scales, which manifests in anomalous scaling features of the structure functions of the magnetic field increments. Moving to smaller scales, i.e. below the ion-cyclotron and/or ion inertial length, these scaling features are still observed, even though it is not clear whether they remain anomalous. Here, we investigate the nature of the scaling properties of magnetic field increments at non-MHD scales for a period of fast solar wind, to establish the occurrence or absence of multifractal features and the collapse of probability distribution functions (PDFs), using the novel Rank-Ordered Multifractal Analysis (ROMA) method, which is more sensitive than the traditional structure-function approach. We find strong evidence for a near mono-scaling behavior, which suggests that the observed turbulent regime at non-MHD scales mainly displays a mono-fractal nature of magnetic field increments. The results are discussed in terms of a non-compact fractal structure of the dissipation field.

  7. Five features for modelling augmented reality

    OpenAIRE

    Liang, Sha; Roast, Chris

    2014-01-01

    Augmented reality is growing rapidly and supports people in different fields such as education, design, navigation and medicine. However, there is limited discussion about the characteristic features of augmented reality and what is meant by the term. This paper presents five different features: changeability, synchronicity and instant, antecedent, partial one-to-one and hidden reality. The explanation of each of these features follows a consistent structure. The benefits of gener...

  8. Study on Isomerous CAD Model Exchange Based on Feature

    Institute of Scientific and Technical Information of China (English)

    SHAO Xiaodong; CHEN Feng; XU Chenguang

    2006-01-01

    A feature-based model-exchange method between isomerous CAD systems is put forward in this paper. In this method, CAD model information is accessed at both the feature and geometry levels and converted according to standard feature operations. The feature information, including the feature tree, dimensions and constraints, which would be lost in traditional data conversion, as well as the geometry, is converted completely from the source CAD system to the destination one. The transferred model can therefore be edited through feature operations, which cannot be achieved by a general model-exchange interface.

  10. Global Deep Convection Models of Saturn's Atmospheric Features

    Science.gov (United States)

    Heimpel, Moritz; Cuff, Keith; Gastine, Thomas; Wicht, Johannes

    2016-04-01

    The Cassini mission, along with previous missions and ground-based observations, has revealed a rich variety of atmospheric phenomena and time variability on Saturn. Some examples of dynamical features are: zonal flows with multiple jet streams, turbulent tilted shear flows that seem to power the jets, the north polar hexagon, the south polar cyclone, large anticyclones in "storm alley", numerous convective storms (white spots) of various sizes, and the 2010/2011 great storm, which destroyed an array of vortices dubbed the "string of pearls". Here we use the anelastic dynamo code MagIC, in non-magnetic mode, to study rotating convection in a spherical shell. The thickness of the shell is set to approximate the depth of the low electrical conductivity deep atmosphere of Saturn, and the convective forcing is set to yield zonal flows of similar velocity (Rossby number) to those of Saturn. Internal heating and the outer entropy boundary conditions allow simple modelling of atmospheric layers with neutral stability or stable stratification. In these simulations we can identify several saturnian and jovian atmospheric features, with some variations. We find that large anticyclonic vortices tend to form in the first anticyclonic shear zones away from the equatorial jet. Cyclones form at the poles, and polar polygonal jet streams, comparable to Saturn's hexagon, may or may not form, depending on the model conditions. Strings of small scale vortical structures arise as convective plumes near boundaries of shear zones. They typically precede larger scale convective storms that spawn propagating shear flow disturbances and anticyclonic vortices, which tend to drift across anticyclonic shear zones, toward the equator (opposite the drift direction of Saturn's 2010/2011 storm). 
Our model results indicate that many identifiable dynamical atmospheric features seen on Jupiter and Saturn arise from deep convection, shaped by planetary rotation, underlying and interacting with stably

  11. Modeling agreement on bounded scales.

    Science.gov (United States)

    Vanbelle, Sophie; Lesaffre, Emmanuel

    2017-01-01

    Agreement is an important concept in medical and behavioral sciences, in particular in clinical decision making, where disagreements may imply different patient management. The concordance correlation coefficient is an appropriate measure to quantify agreement between two scorers on a quantitative scale. However, this measure is based on the first two moments, which may poorly summarize the shape of the score distribution on bounded scales. Bounded outcome scores are common in medical and behavioral sciences. Typical examples are scores obtained on visual analog scales and scores derived as the number of positive items on a questionnaire. These kinds of scores often show a non-standard distribution, like a J- or U-shape, questioning the usefulness of the concordance correlation coefficient as an agreement measure. The logit-normal distribution has been shown to be successful in modeling bounded outcome scores of two types: (1) when the bounded score is a coarsened version of a latent score with a logit-normal distribution on the [0,1] interval and (2) when the bounded score is a proportion with the true probability having a logit-normal distribution. In the present work, a model-based approach, based on a bivariate generalization of the logit-normal distribution, is developed in a Bayesian framework to assess agreement on bounded scales. This method permits direct study of the impact of predictors on the concordance correlation coefficient and can be implemented simply in standard Bayesian software packages, like JAGS and WinBUGS. The performance of the new method is compared to the classical approach using simulations. Finally, the methodology is applied in two different medical domains: cardiology and rheumatology.
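
    The concordance correlation coefficient discussed above depends only on the first two moments; a minimal NumPy sketch of its computation (the function name is ours) illustrates why a location shift lowers agreement even under perfect correlation:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()          # population variances
    sxy = ((x - mx) * (y - my)).mean()   # covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

a = np.array([0.1, 0.4, 0.5, 0.9])
print(concordance_ccc(a, a))        # ~1.0 (perfect agreement)
print(concordance_ccc(a, a + 0.2))  # < 1 despite perfect correlation
```

A constant offset between raters leaves the Pearson correlation at 1 but is penalized by the squared mean difference in the denominator, which is exactly the systematic-bias component the paper argues can mislead on bounded, skewed scales.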

  12. Models and average properties of scale-free directed networks

    Science.gov (United States)

    Bernhardsson, Sebastian; Minnhagen, Petter

    2006-08-01

    We extend the merging model for undirected networks by Kim [Eur. Phys. J. B 43, 369 (2004)] to directed networks and investigate the emerging scale-free networks. Two versions of the directed merging model, friendly and hostile merging, give rise to two distinct network types. We uncover that some nontrivial features of these two network types resemble two levels of a certain randomization/nonspecificity in the link reshuffling during network evolution. Furthermore, the same features show up, respectively, in metabolic networks and transcriptional networks. We introduce measures that single out the distinguishing features between the two prototype networks, as well as point out features that are beyond the prototypes.

  13. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, {Omega}> or approx.0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).

  14. Image Watermarking Using Visual Perception Model and Statistical Features

    Directory of Open Access Journals (Sweden)

    Mrs.C.Akila

    2010-06-01

    Full Text Available This paper presents an effective method for image watermarking using a visual perception model based on statistical features in the low-frequency domain. In the image watermarking community, watermark resistance to geometric attacks is an important issue. Most countermeasures proposed in the literature focus on the problem of global affine transforms such as rotation, scaling and translation (RST), but few are resistant to the more challenging cropping and random bending attacks (RBAs). Watermark embedding can also introduce distortion in the form of visible artifacts. A visual perception model is proposed to quantify the localized tolerance to noise for arbitrary imagery, which achieves a reduction of artifacts. As a result, the watermarking system provides satisfactory performance under content-preserving geometric deformations and image processing operations, including JPEG compression, low-pass filtering, cropping and RBAs.

  15. CONVERSE REASONING FOR FULL DEPRESSION-FEATURE MODEL AND PROCESS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new approach, namely "defining protrusion features with depression parameters", is proposed to address the shortcomings of the protrusion-feature alteration method. A full depression-feature model is built up, and a basic converse-reasoning iterative algorithm for machining processes is given. A detailed examination was carried out on the feature-based modeling system for light industry products (QJFMS), and converse reasoning on the fixture-based machining process was achieved.

  16. Thermal Behaviour of Unusual Local-Scale Surface Features on Vesta

    Science.gov (United States)

    Tosi, F.; Capria, M. T.; De Sanctis, M. C.; Palomba, E.; Grassi, D.; Capaccioni, F.; Ammannito, E.; Combe, J.-Ph.; Sunshine, J. M.; McCord, T. B.; Titus, T. N.; Russell, C. T.; Raymond, C. A.; Mittlefehldt, D. W.; Toplis, M. J.; Forni, O.; Sykes, M. V.

    2012-01-01

    On Vesta, the region of the infrared spectrum beyond approximately 3.5 micrometers is dominated by the thermal emission of the asteroid's surface, which can be used to determine surface temperature by means of temperature-retrieval algorithms. The thermal behavior of areas of unusual albedo seen at the local scale can be related to physical properties that can provide information about the origin of those materials. Dawn's Visible and Infrared Mapping Spectrometer (VIR) hyperspectral cubes are used to retrieve surface temperatures, with high accuracy as long as temperatures are greater than 180 K. Data acquired in the Survey phase (23 July through 29 August 2011) show several unusual surface features: 1) high-albedo (bright) and low-albedo (dark) material deposits, 2) spectrally distinct ejecta, 3) regions suggesting finer-grained materials. Some of the unusual dark and bright features were re-observed by VIR in the subsequent High-Altitude Mapping Orbit (HAMO) and Low-Altitude Mapping Orbit (LAMO) phases at increased pixel resolution. To calculate surface temperatures, we applied a Bayesian approach to nonlinear inversion based on the Kirchhoff law and the Planck function. These results were cross-checked through application of alternative methods. Here we present temperature maps of several local-scale features that were observed by Dawn under different illumination conditions and different local solar times. Some bright terrains have an overall albedo in the visible as much as 40% brighter than surrounding areas. Data from the IR channel of VIR show that bright regions generally correspond to regions with lower thermal emission, i.e. lower temperature, while dark regions correspond to areas with higher thermal emission, i.e. higher temperature. This behavior confirms that many of the dark appearances in the VIS mainly reflect albedo variations. In particular, it is shown that during maximum daily insolation, dark features in the equatorial region may rise to

  17. Features in the Standard Model diphoton background

    CERN Document Server

    Bondarenko, Kyrylo; Ruchayskiy, Oleg; Shaposhnikov, Mikhail

    2016-01-01

    We argue that electromagnetic decays of energetic unflavoured neutral mesons, notably $\eta$, mis-identified as single photons due to the granularity of the electromagnetic calorimeter, might create bump-like features in the diphoton invariant mass spectrum at different energies, including 750 GeV. We discuss what kind of additional analysis could exclude or confirm this hypothesis.

  18. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the ...

  19. Online fringe projection profilometry based on scale-invariant feature transform

    Science.gov (United States)

    Li, Hongru; Feng, Guoying; Yang, Peng; Wang, Zhaomin; Zhou, Shouhuan; Asundi, Anand

    2016-08-01

    An online fringe projection profilometry (OFPP) method based on the scale-invariant feature transform (SIFT) is proposed. Both rotary and linear models are discussed. First, the captured images are enhanced by "retinex" theory for better contrast, and an improved reprojection technique is carried out to rectify pixel size while keeping the correct aspect ratio. Then the SIFT algorithm, combined with the random sample consensus algorithm, is used to match feature points between frames. In this process, a quick response code is innovatively adopted as both a feature pattern and object modulation. The characteristic parameters, which include the rotation angle in rotary OFPP and the rectilinear displacement in linear OFPP, are calculated by a vector-based solution. Moreover, a statistical filter is applied to obtain more accurate values. The equivalent aligned fringe patterns are then extracted from each frame. The equal step algorithm, advanced iterative algorithm, and principal component analysis are applied for phase retrieval, depending on whether the object's direction of motion coincides with the fringe direction. The three-dimensional profile of the moving object can finally be reconstructed. Numerical simulations and experimental results verified the validity and feasibility of the proposed method.
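
    SIFT matching between frames, as in the pipeline above, reduces to nearest-neighbour descriptor search filtered by Lowe's ratio test before the robust (RANSAC) fit; a self-contained NumPy sketch of that matching step, using synthetic stand-in descriptors rather than real SIFT output:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.75):
    """Match rows of desc1 to rows of desc2, keeping only matches whose
    nearest neighbour is clearly better than the second nearest (Lowe's test)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(0)
d2 = rng.normal(size=(20, 8))
d1 = d2[[3, 7, 11]] + rng.normal(scale=0.01, size=(3, 8))  # noisy copies
print(ratio_test_matches(d1, d2))  # [(0, 3), (1, 7), (2, 11)]
```

The surviving matches would then feed the RANSAC stage, which discards any remaining outlier correspondences before the characteristic parameters are estimated.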

  20. Noncircular features in Saturn's rings IV: Absolute radius scale and Saturn's pole direction

    Science.gov (United States)

    French, Richard G.; McGhee-French, Colleen A.; Lonergan, Katherine; Sepersky, Talia; Jacobson, Robert A.; Nicholson, Philip D.; Hedman, Mathew M.; Marouf, Essam A.; Colwell, Joshua E.

    2017-07-01

    We present a comprehensive solution for the geometry of Saturn's ring system, based on orbital fits to an extensive set of occultation observations of 122 individual ring edges and gaps. We begin with a restricted set of very high quality Cassini VIMS, UVIS, and RSS measurements for quasi-circular features in the C and B rings and the Cassini Division, and then successively add suitably weighted additional Cassini and historical occultation measurements (from Voyager, HST and the widely-observed 28 Sgr occultation of 3 Jul 1989) for additional non-circular features, to derive an absolute radius scale applicable across the entire classical ring system. As part of our adopted solution, we determine first-order corrections to the spacecraft trajectories used to determine the geometry of individual occultation chords. We adopt a simple linear model for Saturn's precession, and our favored solution yields a precession rate on the sky of ṅ_P = 0.207 ± 0.006″ yr⁻¹, equivalent to an angular rate of polar motion Ω_P = 0.451 ± 0.014″ yr⁻¹. The 3% formal uncertainty in the fitted precession rate is approaching the point where it can provide a useful constraint on models of Saturn's interior, although realistic errors are likely to be larger, given the linear approximation of the precession model and possible unmodeled systematic errors in the spacecraft ephemerides. Our results are largely consistent with independent estimates of the precession rate based on historical RPX times (Nicholson et al., 1999, AAS/Division for Planetary Sciences Meeting Abstracts #31, 44.01) and with theoretical expectations that account for Titan's 700-yr precession period (Vienne and Duriez 1992, Astronomy and Astrophysics 257, 331-352). The fitted precession rate based on Cassini data only is somewhat lower, which may be an indication of unmodeled shorter-term contributions to Saturn's polar motion from other satellites, or perhaps the result of inconsistencies in the assumed

  1. CONSTRUCTION AND MODIFICATION OF FLEXIBLE FEATURE-BASED MODELS

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A new approach is proposed to generate flexible feature-based models (FFBM), which can be modified dynamically. A BRep/CSFG/FRG hybrid scheme is used to describe FFBM, in which the BRep explicitly defines the model, the CSFG (constructive solid-feature geometry) tree records the feature-based modelling procedure, and the FRG (feature relation graph) reflects different kinds of relationships among features. Topological operators with local retrievability are designed to implement feature addition, which is traced in detail by a topological operation list (TOL). As a result, FFBM can be modified directly in the system database. Related features' chain reactions and variable topologies are supported in design modification, after which the product information adhering to features is not lost. Further, a feature can be modified as rapidly as it was added.

  2. On the Use of Memory Models in Audio Features

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2011-01-01

    Audio feature estimation is potentially improved by including higher- level models. One such model is the Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual spectral flux...

  3. Brane World Models Need Low String Scale

    CERN Document Server

    Antoniadis, Ignatios; Calmet, Xavier

    2011-01-01

    Models with large extra dimensions offer the possibility of the Planck scale being of order the electroweak scale, thus alleviating the gauge hierarchy problem. We show that these models suffer from a breakdown of unitarity at around three quarters of the low effective Planck scale. An obvious candidate to fix the unitarity problem is string theory. We therefore argue that it is necessary for the string scale to appear below the effective Planck scale and that the first signature of such models would be string resonances. We further translate experimental bounds on the string scale into bounds on the effective Planck scale.

  4. Thermal Correlators in Holographic Models with Lifshitz scaling

    CERN Document Server

    Keranen, Ville

    2012-01-01

    We study finite temperature effects in two distinct holographic models that exhibit Lifshitz scaling, looking to identify model independent features in the dual strong coupling physics. We consider the thermodynamics of black branes and find different low-temperature behavior of the specific heat. Deformation away from criticality leads to non-trivial temperature dependence of correlation functions and we study how the characteristic length scale in the two point function of scalar operators varies as a function of temperature and deformation parameters.

  5. Three-dimensional object tracking based on perspective scale invariant feature transform correspondences

    Science.gov (United States)

    Chen, Wei; Liang, Luming; Zhao, Yuelong; Chen, Shu

    2017-05-01

    Reconstructing three-dimensional (3-D) poses from matched feature correspondences is widely used in 3-D object tracking. The precision of correspondence matching plays a major role in pose reconstruction. Without prior knowledge of the perspective camera model, state-of-the-art methods only deal with two-dimensional (2-D) planar affine transforms. An interest-point detector and descriptor, perspective scale-invariant feature transform (perspective SIFT), is proposed to overcome the side effects of viewpoint change; i.e., our detector is invariant to viewpoint changes. Perspective SIFT is detected by the SIFT approach, where the sample region is determined by projecting the original sample region to the image plane based on the established camera model. An iterative algorithm then modifies the pose of the tracked object, and it generally converges to a 3-D perspective invariant point. The pose of the tracked object is finally estimated by the combination of template warping and perspective SIFT correspondences. Thorough evaluations are performed on two public databases, the Biwi Head Pose dataset and the Boston University dataset. Comparisons illustrate that the proposed keypoint detector largely improves tracking performance.

  6. Large-scale assessment of activity landscape feature probabilities of bioactive compounds.

    Science.gov (United States)

    Kayastha, Shilva; Dimova, Dilyana; Iyer, Preeti; Vogt, Martin; Bajorath, Jürgen

    2014-02-24

    Activity landscape representations integrate pairwise compound similarity and potency relationships and provide direct access to characteristic structure-activity relationship features in compound data sets. Because pairwise compound comparisons provide the foundation of activity landscape design, the assessment of specific landscape features such as activity cliffs has generally been confined to the level of compound pairs. A conditional probability-based approach has been applied herein to assign most probable activity landscape features to individual compounds. For example, for a given data set compound, it was determined if it would preferentially engage in the formation of activity cliffs or other landscape features. In a large-scale effort, we have determined conditional activity landscape feature probabilities for more than 160,000 compounds with well-defined activity annotations contained in 427 different target-based data sets. These landscape feature probabilities provide a detailed view of how different activity landscape features are distributed over currently available bioactive compounds.
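
    The pairwise foundation described above is often quantified with the structure-activity landscape index (SALI); a hedged NumPy sketch with invented toy similarities and potencies (the cutoff values and the per-compound counting are illustrative, not the paper's conditional-probability method):

```python
import numpy as np

def sali(sim, pot_i, pot_j, eps=1e-6):
    """Structure-Activity Landscape Index: large when two similar
    compounds differ strongly in potency (an 'activity cliff' pair)."""
    return abs(pot_i - pot_j) / (1.0 - sim + eps)

def cliff_partners(similarity, potency, sim_cut=0.8, pot_cut=2.0):
    """Count, per compound, how many pairs it forms that qualify as
    activity cliffs (similarity >= sim_cut, potency gap >= pot_cut log units)."""
    n = len(potency)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if similarity[i, j] >= sim_cut and abs(potency[i] - potency[j]) >= pot_cut:
                counts[i] += 1
                counts[j] += 1
    return counts

# Toy data: compounds 0 and 1 are near-analogues with a 3-log potency gap.
sim = np.array([[1.0, 0.9, 0.3],
                [0.9, 1.0, 0.2],
                [0.3, 0.2, 1.0]])
pKi = np.array([9.0, 6.0, 7.5])
print(cliff_partners(sim, pKi))  # [1 1 0]
```

Per-compound counts of this kind are the raw material from which conditional probabilities of engaging in cliffs versus smoother landscape features can be estimated across a data set.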

  7. Unsupervised Pattern Classifier for Abnormality-Scaling of Vibration Features for Helicopter Gearbox Fault Diagnosis

    Science.gov (United States)

    Jammu, Vinay B.; Danai, Kourosh; Lewicki, David G.

    1996-01-01

    A new unsupervised pattern classifier is introduced for on-line detection of abnormality in vibration features used for fault diagnosis of helicopter gearboxes. This classifier compares vibration features with their respective normal values and assigns them a value in (0, 1) to reflect their degree of abnormality. Therefore, the salient feature of this classifier is that it does not require feature values associated with faulty cases to identify abnormality. In order to cope with noise and changes in the operating conditions, an adaptation algorithm is incorporated that continually updates the normal values of the features. The proposed classifier is tested using experimental vibration features obtained from an OH-58A main rotor gearbox. The overall performance of this classifier is then evaluated by integrating the abnormality-scaled features for detection of faults. The fault detection results indicate that the performance of this classifier is comparable to the leading unsupervised neural networks: Kohonen's Feature Mapping and Adaptive Resonance Theory (ART2). This is significant considering that the independence of this classifier from fault-related features makes it uniquely suited to abnormality-scaling of vibration features for fault diagnosis.
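
    The classifier's core idea of mapping each feature's deviation from an adapted normal value into (0, 1) can be sketched as follows; the sigmoid squashing and the forgetting-factor update are our assumptions for illustration, not the paper's exact equations:

```python
import numpy as np

class AbnormalityScaler:
    """Maps each vibration feature to a (0,1) abnormality score relative to
    an adaptive estimate of its normal value (no faulty training data needed)."""
    def __init__(self, normal, spread, forget=0.99):
        self.normal = np.asarray(normal, float)   # running normal values
        self.spread = np.asarray(spread, float)   # expected normal variation
        self.forget = forget                      # adaptation rate

    def score(self, x):
        x = np.asarray(x, float)
        z = (x - self.normal) / self.spread
        s = 1.0 / (1.0 + np.exp(-(np.abs(z) - 3.0)))  # ~0 inside 3 sigma
        # Adapt the normal values only when the reading looks normal,
        # so the baseline tracks slow operating-condition drift.
        mask = s < 0.5
        self.normal[mask] = (self.forget * self.normal[mask]
                             + (1 - self.forget) * x[mask])
        return s

sc = AbnormalityScaler(normal=[1.0, 5.0], spread=[0.1, 0.5])
print(sc.score([1.02, 9.0]))  # first feature near 0 (normal), second near 1
```

Gating the update on the score itself is what keeps a genuine fault from being absorbed into the "normal" baseline.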

  8. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge together with digital signal processing methods. For each stage of the hearing perception system, there is a corresponding computational model to simulate its function. Based on this model, speech features are extracted, with features at different levels extracted in each stage. A further processing step for the primary auditory spectrum, based on lateral inhibition, is proposed to extract much more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments are conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations for speech signals.
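
    The lateral-inhibition step mentioned above can be approximated by an on-center/off-surround convolution across spectral channels; a schematic NumPy sketch (the kernel shape and strength are illustrative assumptions, not the paper's model):

```python
import numpy as np

def lateral_inhibition(spectrum, strength=0.5):
    """Sharpen a (primary auditory) spectrum by subtracting a weighted
    average of each channel's neighbours -- a crude lateral-inhibition pass."""
    kernel = np.array([-strength / 2, 1.0, -strength / 2])
    out = np.convolve(spectrum, kernel, mode="same")
    return np.clip(out, 0.0, None)   # rectify negative responses

s = np.array([0.0, 0.2, 1.0, 0.2, 0.0, 0.5, 0.5, 0.5])
print(lateral_inhibition(s))  # the isolated peak survives, flat regions shrink
```

Because flat spectral regions (and broadband noise) are suppressed while spectral peaks survive, the post-inhibition representation is less sensitive to additive noise, which is the robustness the abstract refers to.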

  9. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but advances in sheet-metal part design and manufacturing remain too slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of sheet-metal parts, whose characteristics are used for classification and graph-based representation of the sheet-metal features embodied in a part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship analysis. Since the extracted features include abundant geometric and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  11. Full feature data model for spatial information network integration

    Institute of Scientific and Technical Information of China (English)

    DENG Ji-qiu; BAO Guang-shu

    2006-01-01

    To address the difficulty of integrating data with different models in spatial information integration, the characteristics of the raster structure, vector structure and mixed models were analyzed, and a hierarchical vector-raster integrative full feature model was put forward by combining the advantages of the vector and raster models and using the object-oriented method. The data structures of the four basic features, i.e. point, line, surface and solid, were described. An application was analyzed, and the characteristics of this model were described. In this model, all objects in the real world are divided into and described as features with hierarchy, and all the data are organized in vector form. This model can describe data based on feature, field, network and other models, and avoids the disadvantage of being unable to integrate data based on different models and perform spatial analysis on them in spatial information integration.

  12. COMBINING FEATURE SCALING ESTIMATION WITH SVM CLASSIFIER DESIGN USING GA APPROACH

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This letter adopts a GA (Genetic Algorithm) approach to assist in learning a scaling of features that is most favorable to an SVM (Support Vector Machines) classifier; the combination is named GA-SVM. The relevance coefficients of the various features to the classification task, measured by real-valued scalings, are estimated efficiently by the GA, which exploits a heavy-bias operator to promote sparsity in the feature scalings. There are many potential benefits of this method: feature selection is performed by eliminating irrelevant features whose scaling is zero, while an SVM classifier with enhanced generalization ability is learned simultaneously. Experimental comparisons of the original SVM and GA-SVM demonstrate both economical feature selection and excellent classification accuracy on a junk e-mail recognition problem and an Internet ad recognition problem. The experimental results show that, compared with the original SVM classifier, the number of support vectors decreases significantly and better classification results are achieved with GA-SVM. This also demonstrates that the GA can provide a simple, general, and powerful framework for tuning parameters in an optimization problem, directly improving the recognition performance and recognition rate of the SVM.
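
    A toy version of GA-driven feature scaling with a sparsity bias can be sketched as follows; for self-containment we substitute leave-one-out 1-NN accuracy for the SVM-based fitness and omit crossover, so this illustrates the scaling idea rather than the letter's actual GA-SVM:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: only feature 0 is informative, features 1-2 are noise.
X = rng.normal(size=(60, 3))
y = (X[:, 0] > 0).astype(int)
X[:, 0] += 2 * y  # widen the class gap along the informative feature

def fitness(w):
    """Leave-one-out 1-NN accuracy on features scaled by w, minus a small
    penalty per non-zero scaling (our stand-in for the SVM-based criterion)."""
    Xs = X * w
    d = np.linalg.norm(Xs[:, None] - Xs[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    acc = (y[d.argmin(axis=1)] == y).mean()
    return acc - 0.01 * np.count_nonzero(w)

pop = rng.random((30, 3))                      # population of scaling vectors
for gen in range(40):
    f = np.array([fitness(w) for w in pop])
    parents = pop[f.argsort()[-10:]]           # truncation selection
    kids = parents[rng.integers(0, 10, 30)] + rng.normal(scale=0.05, size=(30, 3))
    kids[rng.random((30, 3)) < 0.1] = 0.0      # heavy bias toward zero scalings
    pop = np.clip(kids, 0, None)

best = pop[np.argmax([fitness(w) for w in pop])]
print(best)  # the informative feature tends to keep the largest scaling
```

The zeroing operator plays the role of the letter's heavy-bias operator: scalings driven to exactly zero amount to discarding the feature, so selection and sparsity pressure act together.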

  13. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of human brain. Single-layer feature extractors are the bricks to build deep networks. Sparse feature learning models are popular models that can learn useful representations. But most of those models need a user-defined constant to control the sparsity of representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, reconstruction error and the sparsity of hidden units simultaneously to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.
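
    The "reasonable compromise" between reconstruction error and sparsity is a Pareto trade-off; a minimal sketch of the dominance test that a multiobjective evolutionary algorithm relies on (the candidate objective values are invented):

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both minimized)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Return the non-dominated subset of objective-value pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Candidate autoencoder configurations: (reconstruction error, sparsity cost)
cands = [(0.10, 0.9), (0.20, 0.5), (0.15, 0.7), (0.30, 0.4), (0.25, 0.6)]
front = pareto_front(cands)
print(front)  # (0.25, 0.6) is dominated by (0.20, 0.5) and drops out
```

The evolutionary algorithm keeps only non-dominated candidates from generation to generation, so no user-defined sparsity constant is needed to pick the balance in advance.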

  14. Regression-Based Feature Selection on Large Scale Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Hussein Mazaar

    2016-02-01

    Full Text Available In this paper, we present an approach for regression-based feature selection in human activity recognition. Due to the high dimensionality of features in human activity recognition, the model may overfit and fail to learn its parameters well. Moreover, many features are redundant or irrelevant. The goal is to select important discriminating features to recognize human activities in videos. The R-squared regression criterion can identify the best features based on the ability of a feature to explain the variations in the target class. The features are significantly reduced, by nearly 99.33%, resulting in better classification accuracy. A Support Vector Machine with a linear kernel is used to classify the activities. The experiments are conducted on the UCF50 dataset. The results show that the proposed model significantly outperforms state-of-the-art methods.
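
    The R-squared criterion above scores each feature by how much of the target's variation it explains; a hedged univariate NumPy sketch (a real pipeline would score the actual activity features, not this synthetic data):

```python
import numpy as np

def r2_scores(X, y):
    """Per-feature R^2 from a univariate least-squares fit of y on each column."""
    y = np.asarray(y, float)
    ss_tot = ((y - y.mean()) ** 2).sum()
    scores = []
    for col in np.asarray(X, float).T:
        slope, intercept = np.polyfit(col, y, 1)
        resid = y - (slope * col + intercept)
        scores.append(1.0 - (resid ** 2).sum() / ss_tot)
    return np.array(scores)

rng = np.random.default_rng(2)
x_good = rng.normal(size=100)
x_noise = rng.normal(size=100)
y = 3 * x_good + rng.normal(scale=0.1, size=100)
scores = r2_scores(np.column_stack([x_good, x_noise]), y)
print(scores)  # feature 0 explains almost all variance; feature 1 almost none
keep = np.argsort(scores)[::-1][:1]  # retain only the top-ranked feature(s)
```

Ranking by R² and keeping only the top fraction is how a ~99% reduction in feature count can still preserve, or even improve, downstream SVM accuracy.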

  15. MULTI-SCALE GAUSSIAN PROCESSES MODEL

    Institute of Scientific and Technical Information of China (English)

    Zhou Yatong; Zhang Taiyi; Li Xiaohe

    2006-01-01

    A novel model named Multi-scale Gaussian Processes (MGP) is proposed. Motivated by the idea of multi-scale representations in wavelet theory, in the new model a Gaussian process is represented at each scale by a linear basis composed of a scale function and its translations. The distribution of the targets of the given samples can then be obtained at different scales. Compared with the standard Gaussian Processes (GP) model, the MGP model can control its complexity conveniently just by adjusting the scale parameter, so it can rapidly trade off generalization ability against empirical risk. Experiments verify the feasibility of the MGP model and show that its performance is superior to the GP model if appropriate scales are chosen.
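
    One concrete reading of a multi-scale Gaussian process is a covariance built as a sum of kernels at several length-scales; the NumPy sketch below is our loose analogue of that idea (the paper's MGP uses translates of a scale function, which we do not reproduce):

```python
import numpy as np

def multiscale_kernel(x1, x2, scales=(0.1, 1.0, 10.0)):
    """Sum of RBF kernels at several length-scales: short scales capture
    fine detail, long scales capture slow trends."""
    d2 = (np.asarray(x1)[:, None] - np.asarray(x2)[None, :]) ** 2
    return sum(np.exp(-d2 / (2 * s ** 2)) for s in scales)

# GP regression with the multi-scale kernel (small jitter for stability).
x = np.linspace(0, 5, 20)
y = np.sin(x) + 0.2 * np.sin(8 * x)     # slow trend plus fast oscillation
K = multiscale_kernel(x, x) + 1e-8 * np.eye(20)
alpha = np.linalg.solve(K, y)
xs = np.linspace(0, 5, 50)
pred = multiscale_kernel(xs, x) @ alpha
```

With the tiny jitter, the predictor interpolates the training points; dropping the shortest scale from `scales` smooths the fit, which is the kind of complexity control via the scale parameter that the abstract describes.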

  17. Semiautomated landscape feature extraction and modeling

    Science.gov (United States)

    Wasilewski, Anthony A.; Faust, Nickolas L.; Ribarsky, William

    2001-08-01

    We have developed a semi-automated procedure for generating correctly located 3D tree objects from overhead imagery. Cross-platform software partitions arbitrarily large, geocorrected and geolocated imagery into manageable sub-images. The user manually selects tree areas from one or more of these sub-images. Tree-group blobs are then narrowed to lines using a special thinning algorithm that retains the topology of the blobs and also stores the thickness of the parent blob. Maxima along these thinned tree groups are found and used as individual tree locations within the tree group. Magnitudes of the local maxima are used to scale the radii of the tree objects. Grossly overlapping trees are culled based on a comparison of tree-tree distance to combined radii. Tree color is randomly selected based on the distribution of sample tree pixels, and height is estimated from tree radius. The final tree objects are then inserted into a terrain database which can be navigated with VGIS, a high-resolution global terrain visualization system developed at Georgia Tech.

  18. Feature Analysis for Modeling Game Content Quality

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    entertainment for individual game players is to tailor player experience in real-time via automatic game content generation. Modeling the relationship between game content and player preferences or affective states is an important step towards this type of game personalization. In this paper we...... analyse the relationship between level design parameters of platform games and player experience. We introduce a method to extract the most useful information about game content from short game sessions by investigating the size of game session that yields the highest accuracy in predicting players......’ preferences, and by defining the smallest game session size for which the model can still predict reported emotion with acceptable accuracy. Neuroevolutionary preference learning is used to approximate the function from game content to reported emotional preferences. The experiments are based on a modified...

  19. Robust action recognition using multi-scale spatial-temporal concatenations of local features as natural action structures.

    Directory of Open Access Journals (Sweden)

    Xiaoyuan Zhu

    Full Text Available Humans and many other animals can detect, recognize, and classify natural actions in a very short time. How this is achieved by the visual system, and how to make machines understand natural actions, have been the focus of neurobiological studies and computational modeling in the last several decades. A key issue is what spatial-temporal features should be encoded and what the characteristics of their occurrences are in natural actions. Current global encoding schemes depend heavily on segmentation, while local encoding schemes lack descriptive power. Here, we propose natural action structures, i.e., multi-size, multi-scale, spatial-temporal concatenations of local features, as the basic features for representing natural actions. In this concept, any action is a spatial-temporal concatenation of a set of natural action structures, which convey a full range of information about natural actions. We took several steps to extract these structures. First, we sampled a large number of sequences of patches at multiple spatial-temporal scales. Second, we performed independent component analysis on the patch sequences and classified the independent components into clusters. Finally, we compiled a large set of natural action structures, each corresponding to a unique combination of the clusters at the selected spatial-temporal scales. To classify human actions, we used a set of informative natural action structures as inputs to two widely used models. We found that the natural action structures obtained here achieved a significantly better recognition performance than low-level features and that the performance was better than or comparable to the best current models. We also found that the classification performance with natural action structures as features was only slightly affected by changes of scale and artificially added noise. We concluded that the natural action structures proposed here can be used as the basic encoding units of actions and may hold

  20. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    of the face recognition problem. The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features usually is larger than the number of images in the data base makes feature...... selection techniques such as forward selection or lasso regression become inadequate. In the experimental section, the performance of the elastic net model is compared with geometrical and color based algorithms widely used in face recognition such as Procrustes nearest neighbor, Eigenfaces, or Fisher...

  1. An adaptive multi-feature segmentation model for infrared image

    Science.gov (United States)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACMs) have been extensively applied to image segmentation; conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is used to modify the level set formulation, which is formed by integrating the MFSPF, carrying local statistical features, with a signed pressure function carrying global information. Experimental results demonstrate that the proposed method overcomes the inadequacy of the original methods and achieves desirable results in segmenting infrared images.

  2. Scene Classification of Remote Sensing Image Based on Multi-scale Feature and Deep Neural Network

    Directory of Open Access Journals (Sweden)

    XU Suhui

    2016-07-01

    Full Text Available Aiming at the low precision of remote sensing image scene classification caused by small sample sizes, a new classification approach is proposed based on a multi-scale deep convolutional neural network (MS-DCNN), which is composed of the nonsubsampled contourlet transform (NSCT), a deep convolutional neural network (DCNN), and a multiple-kernel support vector machine (MKSVM). Firstly, multi-scale decomposition of the remote sensing image is conducted via NSCT. Secondly, the resulting high-frequency and low-frequency subbands are trained by the DCNN to obtain image features at different scales. Finally, MKSVM is adopted to integrate the multi-scale image features and perform remote sensing image scene classification. Experimental results on standard image classification data sets indicate that the proposed approach achieves a strong classification effect by combining the complementary recognition strengths of the low-frequency and high-frequency subbands for different scenes.

  3. Retinal Identification Based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform

    Directory of Open Access Journals (Sweden)

    Xiaoming Xi

    2013-07-01

    Full Text Available Retinal identification based on the retinal vasculature provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retinal identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and its invariance to scale and rotation, has been introduced to retina-based identification. However, shortcomings such as difficult feature extraction and mismatching exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smoothing method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA database and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.

  4. Geometrically robust image watermarking using scale-invariant feature transform and Zernike moments

    Institute of Scientific and Technical Information of China (English)

    Leida Li; Baolong Guo; Kai Shao

    2007-01-01

    In order to resist geometric attacks, a robust image watermarking algorithm is proposed using the scale-invariant feature transform (SIFT) and Zernike moments. As SIFT features are invariant to rotation and scaling, we employ SIFT to extract feature points. Then circular patches are generated using the most robust points. An invariant watermark is generated from each circular patch based on Zernike moments. The watermark is embedded into multiple patches to resist local cropping attacks. Experimental results show that the proposed scheme is robust to both geometric attacks and signal processing attacks.

  5. Steady progression of osteoarthritic features in the canine groove model

    NARCIS (Netherlands)

    Marijnissen, A.C.A.; Roermund, P.M. van; Verzijl, N.; Tekoppele, J.M.; Bijlsma, J.W.J.; Lafeber, F.P.J.G.

    2002-01-01

    Objective: Recently we described a canine model of osteoarthritis (OA), the groove model with features of OA at 10 weeks after induction, identical to those seen in the canine anterior cruciate ligament transection (ACLT) model. This new model depends on cartilage damage accompanied by transient int

  7. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim, eliminating the shortcomings of the existing network modeling capabilities. The approach takes a different path for implementing network contention and bandwidth capacity modeling, using a less synchronous, yet sufficiently accurate, model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.

  8. Ht-Index for Quantifying the Fractal or Scaling Structure of Geographic Features

    CERN Document Server

    Jiang, Bin

    2013-01-01

    Although geographic features, such as mountains and coastlines, are fractal, some studies have claimed that the fractal property is not universal. This claim, which is false, is mainly attributed to the strict definition of fractal dimension as a measure or index for characterizing the complexity of fractals. In this paper, we propose an alternative, the ht-index, to quantify the fractal or scaling structure of geographic features. A geographic feature has ht-index h if the pattern of far more small things than large ones recurs (h-1) times at different scales. The higher the ht-index, the more complex the geographic feature. We conduct three case studies to illustrate how the computed ht-indices capture the complexity of different geographic features. We further discuss how the ht-index is complementary to fractal dimension, and elaborate on a dynamic view behind the ht-index that enables better understanding of geographic forms and processes. Keywords: Scaling of geographic space, fractal dimension, Richard...
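
    A minimal sketch of computing an ht-index from a list of feature sizes, assuming the usual head/tail-breaks style mean split and taking "far more small things than large ones" to mean that the above-mean head is a minority (the 50% threshold is our assumption, not the paper's):

```python
def ht_index(sizes, head_limit=0.5):
    """Ht-index: a feature set has ht-index h if the 'far more small things
    than large ones' pattern recurs (h - 1) times under repeated mean splits."""
    h = 1
    values = list(sizes)
    while len(values) > 1:
        mean = sum(values) / len(values)
        head = [v for v in values if v > mean]
        # The pattern must actually recur: the head (the large things)
        # has to be a minority, and the split must make progress.
        if not head or len(head) / len(values) >= head_limit:
            break
        h += 1
        values = head
    return h
```

    A heavy-tailed list such as eighty 1s, fifteen 10s, four 100s and one 1000 splits twice before the head stops shrinking, giving ht-index 3; a uniform list gives 1.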

  9. Taxometric Analysis of the Antisocial Features Scale of the Personality Assessment Inventory in Federal Prison Inmates

    Science.gov (United States)

    Walters, Glenn D.; Diamond, Pamela M.; Magaletta, Philip R.; Geyer, Matthew D.; Duncan, Scott A.

    2007-01-01

    The Antisocial Features (ANT) scale of the Personality Assessment Inventory (PAI) was subjected to taxometric analysis in a group of 2,135 federal prison inmates. Scores on the three ANT subscales--Antisocial Behaviors (ANT-A), Egocentricity (ANT-E), and Stimulus Seeking (ANT-S)--served as indicators in this study and were evaluated using the…

  10. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone......, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale....

  11. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
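
    The quoted length scale can be checked with the standard Froude-similitude conversion factors. These general relations are our addition for illustration; the experiment's special feature (correctly scaled elastic-wave celerity) is not reproduced here.

```python
# Froude-similitude conversion factors for a geometric scale lam = L_prototype / L_model.
# These are the standard relations for gravity-dominated model tests.
def froude_factors(lam):
    return {
        "length": lam,
        "time": lam ** 0.5,
        "velocity": lam ** 0.5,
        "force": lam ** 3,  # assumes equal fluid density in model and prototype
    }

lam = 37.6                       # scale 1:37.6 from the experiment
f = froude_factors(lam)
model_chain_length = 33.0        # m, the experimental chain
prototype_length = model_chain_length * f["length"]
# 33 m at 1:37.6 corresponds to roughly 1240 m of 76 mm stud chain at full scale
```

    The same factors convert measured model tensions and periods to prototype values.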

  12. Modeling Suspicious Email Detection using Enhanced Feature Selection

    OpenAIRE

    2013-01-01

    The paper presents a suspicious email detection model which incorporates enhanced feature selection. In the paper we proposed the use of feature selection strategies along with classification techniques for terrorist email detection. The presented model focuses on the evaluation of machine learning algorithms such as decision tree (ID3), logistic regression, Naïve Bayes (NB), and Support Vector Machine (SVM) for detecting emails containing suspicious content. In the literature, various algo...
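
    As an illustration of the kind of pipeline evaluated in the paper, here is a toy multinomial Naive Bayes classifier with a crude frequency-difference feature selector. The tiny corpus, the selector, and all names are ours, not the paper's.

```python
import math
from collections import Counter

def select_features(docs, labels, k):
    """Crude feature selection: keep the k tokens whose class-conditional
    frequencies differ the most (an illustrative stand-in for the paper's
    feature-selection strategies)."""
    pos = Counter(w for d, y in zip(docs, labels) if y for w in d.split())
    neg = Counter(w for d, y in zip(docs, labels) if not y for w in d.split())
    vocab = set(pos) | set(neg)
    score = {w: abs(pos[w] - neg[w]) for w in vocab}
    return set(sorted(vocab, key=score.get, reverse=True)[:k])

def train_nb(docs, labels, feats):
    """Multinomial Naive Bayes with Laplace smoothing over selected features."""
    counts = {0: Counter(), 1: Counter()}
    prior = Counter(labels)
    for d, y in zip(docs, labels):
        counts[y].update(w for w in d.split() if w in feats)
    def predict(doc):
        scores = {}
        for y in (0, 1):
            total = sum(counts[y].values()) + len(feats)
            s = math.log(prior[y] / len(labels))
            for w in doc.split():
                if w in feats:
                    s += math.log((counts[y][w] + 1) / total)
            scores[y] = s
        return max(scores, key=scores.get)
    return predict

docs = ["attack plan tonight", "meeting notes attached",
        "bomb attack target", "lunch plan friday"]
labels = [1, 0, 1, 0]
feats = select_features(docs, labels, 4)
predict = train_nb(docs, labels, feats)
```

    With this toy corpus, `predict("attack attack")` classifies as suspicious and `predict("lunch meeting friday")` as benign.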

  13. Thermal Analysis of Unusual Local-scale Features on the Surface of Vesta

    Science.gov (United States)

    Tosi, F.; Capria, M. T.; DeSanctis, M. C.; Capaccioni, F.; Palomba, E.; Zambon, F.; Ammannito, E.; Blewett, D. T.; Combe, J.-Ph.; Denevi, B. W.; Li, J.-Y.; Mittlefehldt, D. W.; Palmer, E.; Sunshine, J. M.; Titus, T. N.; Raymond, C. A.; Russell, C. T.

    2013-01-01

    At 525 km in mean diameter, Vesta is the second-most massive object in the main asteroid belt of our Solar System. At all scales, pyroxene absorptions are the most prominent spectral features on Vesta and overall, Vesta mineralogy indicates a complex magmatic evolution that led to a differentiated crust and mantle [1]. The thermal behavior of areas of unusual albedo seen on the surface at the local scale can be related to physical properties that can provide information about the origin of those materials. Dawn's Visible and Infrared Mapping Spectrometer (VIR) [2] hyperspectral images are routinely used, by means of temperature-retrieval algorithms, to compute surface temperatures along with spectral emissivities. Here we present temperature maps of several local-scale features of Vesta that were observed by Dawn under different illumination conditions and different local solar times.

  14. Phenomenological features of dreams: Results from dream log studies using the Subjective Experiences Rating Scale (SERS).

    Science.gov (United States)

    Kahan, Tracey L; Claudatos, Stephanie

    2016-04-01

    Self-ratings of dream experiences were obtained from 144 college women for 788 dreams, using the Subjective Experiences Rating Scale (SERS). Consistent with past studies, dreams were characterized by a greater prevalence of vision, audition, and movement than smell, touch, or taste, by both positive and negative emotion, and by a range of cognitive processes. A Principal Components Analysis of SERS ratings revealed ten subscales: four sensory, three affective, one cognitive, and two structural (events/actions, locations). Correlations (Pearson r) among subscale means showed a stronger relationship among the process-oriented features (sensory, cognitive, affective) than between the process-oriented and content-centered (structural) features--a pattern predicted from past research (e.g., Bulkeley & Kahan, 2008). Notably, cognition and positive emotion were associated with a greater number of other phenomenal features than was negative emotion; these findings are consistent with studies of the qualitative features of waking autobiographical memory (e.g., Fredrickson, 2001).

  15. Statistical evolution of quiet-Sun small scale magnetic features using Sunrise observations

    CERN Document Server

    Anusha, L S; Hirzberger, Johann; Feller, Alex

    2016-01-01

    The evolution of small magnetic features in quiet regions of the Sun provides a unique window for probing solar magneto-convection. Here we analyze small scale magnetic features in the quiet Sun, using the high resolution, seeing-free observations from the Sunrise balloon-borne solar observatory. Our aim is to understand the contribution of different physical processes, such as splitting, merging, emergence and cancellation of magnetic fields, to the rearrangement, addition and removal of magnetic flux in the photosphere. We employ a statistical approach for the analysis and the evolution studies are carried out using a feature tracking technique. In this paper we provide a detailed description of the feature tracking algorithm that we have newly developed and we present the results of a statistical study of several physical quantities. The results on the fractions of the flux in the emergence, appearance, splitting, merging, disappearance and cancellation qualitatively agree with other recent studies. To summ...

  16. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  17. Uncertainty Consideration in Watershed Scale Models

    Science.gov (United States)

    Watershed scale hydrologic and water quality models have been used with increasing frequency to devise alternative pollution control strategies. With recent reenactment of the 1972 Clean Water Act’s TMDL (total maximum daily load) component, some of the watershed scale models are being recommended ...

  19. Selecting Optimal Subset of Features for Student Performance Model

    Directory of Open Access Journals (Sweden)

    Hany M. Harb

    2012-09-01

    Full Text Available Educational data mining (EDM) is a new, growing research area in which data mining concepts are applied in the educational field for the purpose of extracting useful information on student behavior in the learning process. Classification methods like decision trees, rule mining, and Bayesian networks can be applied to educational data for predicting student behavior, such as performance in an examination. This prediction may help in student evaluation. As feature selection influences the predictive accuracy of any performance model, it is essential to study in detail the effectiveness of the student performance model in connection with feature selection techniques. The main objective of this work is to achieve high predictive performance by adopting various feature selection techniques to increase predictive accuracy with the least number of features. The outcomes show a reduction in computational time and construction cost in both the training and classification phases of the student performance model.

  20. Whispered speaker identification based on feature and model hybrid compensation

    Institute of Scientific and Technical Information of China (English)

    GU Xiaojiang; ZHAO Heming; Lu Gang

    2012-01-01

    In order to increase the short-time whispered speaker recognition rate in variable channel conditions, hybrid compensation in the model and feature domains is proposed. The method is based on joint factor analysis in the model training stage. It extracts the speaker factor and eliminates the channel factor by estimating the speaker and channel spaces from the training speech. Then, in the test stage, the test speech channel factor is projected into the feature space to perform feature compensation, so channel information is removed in both the model and feature domains in order to improve the recognition rate. The experiment result shows that the hybrid compensation obtains similar recognition rates in the three different training channel conditions and that this method is more effective than joint factor analysis in tests on short whispered speech.

  1. Multi-scale feature learning on pixels and super-pixels for seminal vesicles MRI segmentation

    Science.gov (United States)

    Gao, Qinquan; Asthana, Akshay; Tong, Tong; Rueckert, Daniel; Edwards, Philip "Eddie"

    2014-03-01

    We propose a learning-based approach to segment the seminal vesicles (SV) via random forest classifiers. The proposed discriminative approach relies on the decision forest using high-dimensional multi-scale context-aware spatial, textual and descriptor-based features at both pixel and super-pixel level. After affine transformation to a template space, the relevant high-dimensional multi-scale features are extracted and random forest classifiers are learned based on the masked region of the seminal vesicles from the most similar atlases. Using these classifiers, an intermediate probabilistic segmentation is obtained for the test images. Then, a graph-cut based refinement is applied to this intermediate probabilistic representation of each voxel to get the final segmentation. We apply this approach to segment the seminal vesicles from 30 MRI T2 training images of the prostate, which presents a particularly challenging segmentation task. The results show that the multi-scale approach and the augmentation of the pixel based features with the super-pixel based features enhances the discriminative power of the learnt classifier which leads to a better quality segmentation in some very difficult cases. The results are compared to the radiologist labeled ground truth using leave-one-out cross-validation. Overall, the Dice metric of 0.7249 and Hausdorff surface distance of 7.0803 mm are achieved for this difficult task.
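
    The Dice metric used to score these segmentations is defined as 2|A ∩ B| / (|A| + |B|); a minimal sketch over voxel coordinate sets:

```python
def dice(a, b):
    """Dice similarity of two binary masks given as sets of voxel coordinates:
    2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

seg   = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
# three shared voxels out of 4 + 4 gives 2*3/8 = 0.75
```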

  2. Spatial uncertainty model for visual features using a Kinect™ sensor.

    Science.gov (United States)

    Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong

    2012-01-01

    This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
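
    The covariance propagation described here follows the standard first-order rule Σ_xyz = J Σ_uvd Jᵀ; the sketch below numerically estimates the Jacobian J of the usual pinhole/disparity back-projection. The intrinsic parameters in the example are invented placeholders, not calibrated Kinect values.

```python
import numpy as np

def disparity_to_point(u, v, d, fx, fy, cx, cy, base):
    """Pinhole/disparity back-projection:
    Z = fx * b / d,  X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy."""
    z = fx * base / d
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def propagate_cov(u, v, d, cov_uvd, fx, fy, cx, cy, base, eps=1e-5):
    """First-order uncertainty propagation Sigma_xyz = J Sigma_uvd J^T,
    with J estimated by central differences of the back-projection."""
    p = np.array([u, v, d], dtype=float)
    J = np.empty((3, 3))
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        J[:, i] = (disparity_to_point(*(p + dp), fx, fy, cx, cy, base)
                   - disparity_to_point(*(p - dp), fx, fy, cx, cy, base)) / (2 * eps)
    return J @ cov_uvd @ J.T
```

    At the principal point the resulting 3x3 covariance is diagonal, and its depth entry matches the analytic value (fx * b / d^2)^2 times the disparity variance.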

  3. Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

    Directory of Open Access Journals (Sweden)

    Jae-Han Park

    2012-06-01

    Full Text Available This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.

  4. Towards the maturity model for feature oriented domain analysis

    Directory of Open Access Journals (Sweden)

    Muhammad Javed

    2014-09-01

    Full Text Available Assessing the quality of a model has always been a challenge for researchers in academia and industry. The quality of a feature model is a prime factor because it is used in the development of products; a degraded feature model leads to the development of low-quality products. Few efforts have been made to improve the quality of feature models. This paper presents our ongoing work, i.e., the development of a FODA (Feature Oriented Domain Analysis) maturity model which will help to evaluate the quality of a given feature model. In this paper, we provide the quality levels along with their descriptions. The proposed model consists of four levels, starting from level 0 to level 3. The design of each level is based on the severity of errors, with error severity decreasing from level 0 to level 3. We elaborate each level with the help of examples. All examples are borrowed from material published by the Software Product Lines (SPL) research community for the application of our framework.

  5. Multi-resolution representation of digital terrain models with terrain features preservation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Constructing a multi-resolution TIN model is an important issue in the contexts of visualization, virtual reality (VR), and geographic information systems (GIS). This paper proposes a new method for constructing multi-resolution TIN models with multi-scale topographic feature preservation. The proposed method is driven by a half-edge collapse operation in a greedy framework and employs a new quadric error metric to efficiently measure geometric errors. We define topographic features in a multi-scale manner using a center-surround operator on Gaussian-weighted mean curvatures. Experimental results demonstrate that the proposed method performs better than previous methods in terms of topographic feature preservation, and is able to achieve multi-resolution TIN models with a higher accuracy.
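
    The quadric error metric that drives such greedy edge collapses can be sketched in its standard Garland-Heckbert form; the paper's curvature-weighted feature term is not reproduced here.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental quadric K = q q^T for the supporting plane of a triangle,
    with q = (a, b, c, d) such that ax + by + cz + d = 0 and (a, b, c) a
    unit normal."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    q = np.append(n, -n.dot(p0))
    return np.outer(q, q)

def vertex_error(Q, v):
    """Quadric error v^T Q v: the sum of squared distances from v to the
    planes accumulated in Q."""
    vh = np.append(v, 1.0)
    return vh @ Q @ vh

# The quadric of a vertex is the sum of the quadrics of its incident
# triangles; a half-edge collapse keeps the candidate with the smallest
# resulting error.
tris = [(np.array([0.0, 0.0, 0.0]),
         np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]))]
Q = sum(plane_quadric(*t) for t in tris)
assert abs(vertex_error(Q, np.array([0.0, 0.0, 0.0]))) < 1e-12      # on the plane
assert abs(vertex_error(Q, np.array([0.0, 0.0, 2.0])) - 4.0) < 1e-12  # 2 units off
```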

  6. Models for Small-Scale Structure on Cosmic Strings: II. Scaling and its stability

    CERN Document Server

    Vieira, J P P; Shellard, E P S

    2016-01-01

    We make use of the formalism described in a previous paper [Martins et al., Phys. Rev. D90 (2014) 043518] to address general features of wiggly cosmic string evolution. In particular, we highlight the important role played by poorly understood energy loss mechanisms and propose a simple ansatz which tackles this problem in the context of an extended velocity-dependent one-scale model. We find a general procedure to determine all the scaling solutions admitted by a specific string model and study their stability, enabling a detailed comparison with future numerical simulations. A simpler comparison with previous Goto-Nambu simulations supports earlier evidence that scaling is easier to achieve in the matter era than in the radiation era. In addition, we also find that the requirement that a scaling regime be stable seems to notably constrain the allowed range of energy loss parameters.

  7. Scaling limits of a model for selection at two scales

    Science.gov (United States)

    Luo, Shishi; Mattingly, Jonathan C.

    2017-04-01

    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0,1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming–Viot process, an infinite dimensional stochastic process that is frequently associated with a population genetics.

  8. Modeling and simulation with operator scaling

    CERN Document Server

    Cohen, Serge; Rosinski, Jan

    2009-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical applications. A classification of operator stable Levy processes in two dimensions is provided according to their exponents and symmetry groups. We conclude with some remarks and extensions to general operator self-similar processes.
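
    Operator scaling replaces the scalar factor c^H by a matrix power c^E = exp(E log c), so each coordinate can scale with its own exponent. A small numerical sketch (the exponent matrix is an arbitrary example; the series matrix exponential is our own minimal implementation):

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Matrix exponential by truncated Taylor series (adequate for small,
    well-scaled matrices such as these examples)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def scale_operator(E, c):
    """Operator scale factor c^E = exp(E log c), as in operator
    self-similarity X(ct) =d c^E X(t)."""
    return expm_taylor(E * np.log(c))

# Diagonal exponent: each coordinate gets its own Hurst-like index.
E = np.diag([0.5, 0.9])
A = scale_operator(E, 4.0)
assert np.allclose(np.diag(A), [4.0 ** 0.5, 4.0 ** 0.9])
# Group property: c1^E c2^E = (c1 * c2)^E
assert np.allclose(scale_operator(E, 2.0) @ scale_operator(E, 3.0),
                   scale_operator(E, 6.0))
```

    A non-diagonal E mixes coordinates under rescaling, which is exactly what the classification by exponents and symmetry groups mentioned above organizes.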

  9. On the crucial features of a single-file transport model for ion channels

    CERN Document Server

    Liang, Kuo Kan

    2013-01-01

    It has long been accepted that the multiple-ion single-file transport model is appropriate for many kinds of ion channels. However, most purely theoretical works in this field have not captured all of the important features of realistic systems. Nowadays, large-scale atomic-level simulations are more feasible. Discrepancies between theories, simulations and experiments are becoming obvious, enabling people to carefully examine the missing parts of the theoretical models and methods. In this work, we attempt to identify the essential features that such models should possess in order for the physical properties of an ion channel to be adequately reflected.

  10. Adaptability Feature's Concept, Modeling and Application in Product Design

    Institute of Scientific and Technical Information of China (English)

    Bai Yuewei; Chen Zhuoning; Wei Shuangyu; Bin Hongzan

    2003-01-01

    Current 3D CAD/CAM systems, both research prototypes and commercial systems, based on traditional feature modeling are hampered by complicated modeling and difficult maintenance. This paper introduces a new method for modeling parts by using adaptability features (AF), by which consistent relationships among parts and assemblies can be maintained throughout the design process. In addition, the design process can be accelerated, time-to-market shortened, and product quality improved. Some essential issues of the strategy are discussed. A system, KMCAD3D, that takes advantage of AF has been developed. It is shown that the method discussed is a feasible and effective way to improve current feature modeling technology.

  11. Detecting feature interactions in Web services with model checking techniques

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    As a platform-independent software system, a Web service is designed to offer interoperability among diverse and heterogeneous applications. With the introduction of service composition in Web service creation, various message interactions among the atomic services result in a problem resembling the feature interaction problem in the telecommunication area. This article defines the problem as feature interaction in Web services and proposes a model checking-based detection method. In the method, the Web service description is translated to the Promela language - the input language of the model checker Simple Promela Interpreter (SPIN) - and the specific properties, expressed as linear temporal logic (LTL) formulas, are formulated according to our classification of feature interaction. Then, SPIN is used to check these specific properties to detect the feature interaction in Web services.

  12. Fabrication method for small-scale structures with non-planar features

    Energy Technology Data Exchange (ETDEWEB)

    Burckel, David Bruce; Ten Eyck, Gregory A.

    2016-09-20

    The fabrication of small-scale structures is disclosed. A unit-cell of a small-scale structure with non-planar features is fabricated by forming a membrane on a suitable material. A pattern is formed in the membrane and a portion of the substrate underneath the membrane is removed to form a cavity. Resonators are then directionally deposited on the wall or sides of the cavity. The cavity may be rotated during deposition to form closed-loop resonators. The resonators may be non-planar. The unit-cells can be formed in a layer that includes an array of unit-cells.

  13. A Registration Scheme for Multispectral Systems Using Phase Correlation and Scale Invariant Feature Matching

    Directory of Open Access Journals (Sweden)

    Hanlun Li

    2016-01-01

    Full Text Available In the past few years, many multispectral systems which consist of several identical monochrome cameras equipped with different bandpass filters have been developed. However, due to the significant difference in the intensity between different band images, image registration becomes very difficult. Considering the common structural characteristic of the multispectral systems, this paper proposes an effective method for registering different band images. First we use the phase correlation method to calculate the parameters of a coarse-offset relationship between different band images. Then we use the scale invariant feature transform (SIFT to detect the feature points. For every feature point in a reference image, we can use the coarse-offset parameters to predict the location of its matching point. We only need to compare the feature point in the reference image with the several near feature points from the predicted location instead of the feature points all over the input image. Our experiments show that this method not only avoids false matches and increases correct matches, but also solves the matching problem between an infrared band image and a visible band image in cases lacking man-made objects.
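    Record 13's coarse-to-fine idea can be sketched in a few lines of numpy. The phase-correlation step and the radius-guided matching below are illustrative stand-ins for the paper's full pipeline; keypoint and descriptor extraction (e.g. SIFT) is assumed to have happened already, and the search radius is a made-up parameter.

```python
import numpy as np

def phase_correlation(ref, tgt):
    """Estimate the integer (dy, dx) offset of tgt relative to ref from
    the peak of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(tgt)
    cps = np.conj(F1) * F2
    cps /= np.abs(cps) + 1e-12                 # keep phase information only
    corr = np.real(np.fft.ifft2(cps))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size into negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def guided_match(ref_kp, ref_desc, tgt_kp, tgt_desc, offset, radius=5.0):
    """Match each reference keypoint only against target keypoints that lie
    near the location predicted by the coarse offset, instead of scanning
    every keypoint in the input image."""
    dy, dx = offset
    matches = []
    for i, (y, x) in enumerate(ref_kp):
        pred = np.array([y + dy, x + dx], float)
        near = [j for j, p in enumerate(tgt_kp)
                if np.linalg.norm(np.asarray(p, float) - pred) <= radius]
        if near:
            dists = [np.linalg.norm(ref_desc[i] - tgt_desc[j]) for j in near]
            matches.append((i, near[int(np.argmin(dists))]))
    return matches
```

    Restricting candidates to the predicted neighborhood is what lets the method reject spurious cross-band matches that a global nearest-neighbor search would accept.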

  14. The Saturnian ribbon feature: A baroclinically unstable model

    Science.gov (United States)

    Godfrey, D.

    1986-01-01

    Using measurements made by the Voyager spacecraft, an oscillatory feature in the northern midlatitudes of Saturn is examined. Measurements made by the imaging and infrared instruments are used to estimate its horizontal wavelength and vertical extent. Some of these characteristics suggest that the feature could be due to baroclinic instability. A numerical model of such an instability is described, with parameters based upon the Voyager observations and using the lower boundary condition developed by Gierasch et al. for the Jovian planets.

  15. Features and New Physical Scales in Primordial Observables: Theory and Observation

    CERN Document Server

    Chluba, Jens; Patil, Subodh P.

    2015-01-01

    All cosmological observations to date are consistent with adiabatic, Gaussian and nearly scale invariant initial conditions. These findings provide strong evidence for a particular symmetry breaking pattern in the very early universe (with a close to vanishing order parameter, $\epsilon$), widely accepted as conforming to the predictions of the simplest realizations of the inflationary paradigm. However, given that our observations are only privy to perturbations, in inferring something about the background that gave rise to them, it should be clear that many different underlying constructions project onto the same set of cosmological observables. Features in the primordial correlation functions, if present, would offer a unique and discriminating window onto the parent theory in which the mechanism that generated the initial conditions is embedded. In certain contexts, simple linear response theory allows us to infer new characteristic scales from the presence of features that can break the aforementioned de...

  16. Development and validation of scales to measure organisational features of acute hospital wards.

    Science.gov (United States)

    Adams, A; Bond, S; Arber, S

    1995-12-01

    In order to make comparisons between wards and explain variations in outcomes of nursing care, there is a growing need in nursing research for reliable and valid measures of the organisational features of acute hospital wards. This research developed the Ward Organisational Features Scales (WOFS), a set of six scales comprising 14 subscales which measure discrete dimensions of acute hospital wards. A study of a nationally representative sample of 825 nurses working in 119 acute wards in 17 hospitals, drawn from seven Regional Health Authorities in England, provides evidence for the structure, reliability and validity of this comprehensive set of measures, which relate to: the physical environment of the ward, professional nursing practice, ward leadership, professional working relationships, nurses' influence and job satisfaction. Implications for further research are discussed.

  17. Fine-scale features on the sea surface in SAR satellite imagery – Part 1: Simultaneous in-situ measurements

    Directory of Open Access Journals (Sweden)

    S. Brusch

    2012-09-01

    Full Text Available This work is aimed at identifying the origin of fine-scale features on the sea surface in synthetic aperture radar (SAR imagery with the help of in-situ measurements as well as numerical models (presented in a companion paper. We are interested in natural and artificial features starting from the horizontal scale of the upper ocean mixed layer, around 30–50 m. These features are often associated with three-dimensional upper ocean dynamics. We have conducted a number of studies involving in-situ observations in the Straits of Florida during SAR satellite overpass. The data include examples of sharp frontal interfaces, wakes of surface ships, internal wave signatures, as well as slicks of artificial and natural origin. Atmospheric processes, such as squall lines and rain cells, produced prominent signatures on the sea surface. This data has allowed us to test an approach for distinguishing between natural and artificial features and atmospheric influences in SAR images that is based on a co-polarized phase difference filter.

  18. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    of scale formation found in many industrial processes, and especially in oilfield and geothermal operations. We want to contribute to the study of this problem by releasing a simple and accurate thermodynamic model capable of calculating the behaviour of scaling minerals, covering a wide range...... of temperature and pressure. Reliable experimental solubility measurements under conditions similar to those found in reality will help the development of strong and consistent models. Chapter 1 is a short introduction to the problem of scale formation, the model chosen to study it, and the experiments performed...... the thermodynamic model used in this Ph.D. project. A review of alternative activity coefficient models and earlier work on scale formation is provided. A guideline to the parameter estimation procedure and the number of parameters estimated in the present work are also described. The prediction of solid

  19. Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.

    Science.gov (United States)

    Youji Feng; Lixin Fan; Yihong Wu

    2016-01-01

    The essence of image-based localization lies in matching 2D key points in the query image and 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussian (DoG) and Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and the descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough in indexing binary features, and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses non-binary pixel intensity differences available in descriptor extraction. By using the proposed indexing approach, matching binary features is no longer much slower but slightly faster than matching SIFT features. Consequently, the overall localization speed is significantly improved due to the much faster key point detection.
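    The core indexing idea, trading a linear scan of binary descriptors for a tree of single-bit tests, can be sketched as follows. The random bit selection here replaces the paper's supervised, label-driven node tests and probabilistic priority search, and the leaf size is an illustrative parameter:

```python
import numpy as np

class BinaryIndexTree:
    """Toy randomized tree over 0/1 descriptors: each internal node tests
    one bit, so a lookup costs O(depth) plus a scan of a single leaf."""
    def __init__(self, descs, leaf_size=8, seed=0):
        self.descs = np.asarray(descs)
        self.leaf_size = leaf_size
        self.rng = np.random.default_rng(seed)
        self.root = self._build(np.arange(len(self.descs)))

    def _build(self, idx):
        if len(idx) <= self.leaf_size:
            return ('leaf', idx)
        # pick a random bit that actually splits this subset
        for b in self.rng.permutation(self.descs.shape[1]):
            ones = self.descs[idx, b] == 1
            if 0 < ones.sum() < len(idx):
                return ('node', b,
                        self._build(idx[~ones]), self._build(idx[ones]))
        return ('leaf', idx)      # all remaining descriptors are identical

    def query(self, q):
        node = self.root
        while node[0] == 'node':
            _, b, left, right = node
            node = right if q[b] == 1 else left
        leaf = node[1]
        # Hamming-distance scan restricted to the reached leaf
        d = np.count_nonzero(self.descs[leaf] != q, axis=1)
        return leaf[int(np.argmin(d))]
```

    A query identical to a stored descriptor follows the same bit tests as that descriptor, so it always reaches the leaf containing it; approximate behavior only appears for perturbed queries.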

  20. Assessment of Borderline Personality Features in Population Samples: Is the Personality Assessment Inventory-Borderline Features Scale Measurement Invariant across Sex and Age?

    Science.gov (United States)

    De Moor, Marleen H. M.; Distel, Marijn A.; Trull, Timothy J.; Boomsma, Dorret I.

    2009-01-01

    Borderline personality disorder (BPD) is more often diagnosed in women than in men, and symptoms tend to decline with age. Using a large community sample, the authors investigated whether sex and age differences in four main features of BPD, measured with the "Personality Assessment Inventory-Borderline Features" scale (PAI-BOR; Morey,…

  1. A feature fusion based forecasting model for financial time series.

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than the two other similar models.
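    The fusion step of this model, combining two feature blocks with canonical correlation analysis, can be sketched in plain numpy. The ICA feature extraction and the SVM regressor are omitted, and the ridge regularizer and sum-fusion rule are illustrative choices, not the paper's:

```python
import numpy as np

def cca_fuse(X, Y, k=1, reg=1e-6):
    """CCA via SVD of the whitened cross-covariance; returns the k fused
    canonical variates and their canonical correlations."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n

    def inv_sqrt(S):                    # S^{-1/2} by eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    A = Wx @ U[:, :k]                   # projection for feature block 1
    B = Wy @ Vt[:k].T                   # projection for feature block 2
    # fuse by summing the paired canonical variates
    return Xc @ A + Yc @ B, s[:k]
```

    In the paper's pipeline the fused variates would then be fed to the support vector machine as the "intrinsic features".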

  2. A feature fusion based forecasting model for financial time series.

    Directory of Open Access Journals (Sweden)

    Zhiqiang Guo

    Full Text Available Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than the two other similar models.

  3. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationships between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals across an entire series of scales; meanwhile, they face serious parameter-correction issues because imaging parameters vary between sensors (geometric correction, spectral correction, etc.). Utilizing a single-sensor image, a fractal methodology was adopted to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for validation of NDVI. All of this proves that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
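    The idea of describing how a retrieval changes across a continuous series of scales can be illustrated with a toy numpy example: NDVI is recomputed after block-aggregating the bands at several scales, and a power law is fitted to its spatial variability. The synthetic fields, the scale set, and the choice of standard deviation as the statistic are all illustrative assumptions, not the paper's actual fractal model:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    return (nir - red) / (nir + red + eps)

def block_mean(a, s):
    """Aggregate a 2D band to a coarser grid by s-by-s block averaging."""
    h, w = a.shape
    return a[:h//s*s, :w//s*s].reshape(h//s, s, w//s, s).mean(axis=(1, 3))

def ndvi_scaling_exponent(nir, red, scales=(1, 2, 4, 8, 16)):
    """Fit log(std of NDVI) vs log(scale); the slope plays the role of a
    scaling exponent describing how spatial variability decays as pixels
    are aggregated."""
    stds = [ndvi(block_mean(nir, s), block_mean(red, s)).std()
            for s in scales]
    slope = np.polyfit(np.log(scales), np.log(stds), 1)[0]
    return slope, stds
```

    Because aggregation averages out fine-scale variability, the fitted slope is negative; a fractal (power-law) model is exactly a claim that this log-log relation stays linear over the whole scale series.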

  4. Features of Functioning the Integrated Building Thermal Model

    Directory of Open Access Journals (Sweden)

    Morozov Maxim N.

    2017-01-01

    Full Text Available A model of a building heating system, consisting of an energy source, a distributed automatic control system, the elements of an individual heating unit, and the heating system itself, is designed. The Simulink application of the Matlab mathematical package is selected as a platform for the model. The specialized Simscape application libraries, together with a wide range of Matlab mathematical tools, allow the "acausal" modeling concept to be applied. Implementing the "physical" representation of the object model improved the accuracy of the models. The principle of operation and the features of the functioning of the thermal model are described. Investigations of building cooling dynamics were carried out.

  5. Unified Saliency Detection Model Using Color and Texture Features.

    Science.gov (United States)

    Zhang, Libo; Yang, Lin; Luo, Tiejian

    2016-01-01

    Saliency detection has attracted the attention of many researchers and become a very active area of research. Recently, many saliency detection models have been proposed and have achieved excellent performance in various fields. However, most of these models only consider low-level features. This paper proposes a novel saliency detection model using both color and texture features and incorporating higher-level priors. The SLIC superpixel algorithm is applied to form an over-segmentation of the image. A color saliency map and a texture saliency map are calculated based on the region contrast method and adaptive weights. Higher-level priors, including a location prior and a color prior, are incorporated into the model to achieve better performance, and a full-resolution saliency map is obtained by up-sampling. Experimental results on three datasets demonstrate that the proposed saliency detection model outperforms the state-of-the-art models.
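    As a point of reference for the region-contrast idea, the classic global color-contrast baseline (saliency of a pixel = distance of its color to the mean image color) fits in a few lines. This is only the color-contrast core; the paper's model additionally uses SLIC superpixels, texture contrast, and location/color priors:

```python
import numpy as np

def global_contrast_saliency(img):
    """Global color-contrast saliency baseline: per-pixel Euclidean
    distance to the mean image color, normalized to [0, 1]."""
    mean_color = img.reshape(-1, img.shape[2]).mean(axis=0)
    sal = np.linalg.norm(img - mean_color, axis=2)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

    Region-contrast methods replace the single global mean with per-superpixel contrasts, which preserves object boundaries far better than this pixel-wise baseline.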

  6. Representation of fluctuation features in pathological knee joint vibroarthrographic signals using kernel density modeling method.

    Science.gov (United States)

    Yang, Shanshan; Cai, Suxian; Zheng, Fang; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Zou, Quan; Chen, Jian

    2014-10-01

    This article applies advanced signal processing and computational methods to study the subtle fluctuations in knee joint vibroarthrographic (VAG) signals. Two new features are extracted to characterize the fluctuations of VAG signals. The fractal scaling index parameter is computed using the detrended fluctuation analysis algorithm to describe the fluctuations associated with intrinsic correlations in the VAG signal. The averaged envelope amplitude feature measures the difference between the upper and lower envelopes averaged over an entire VAG signal. Statistical analysis with the Kolmogorov-Smirnov test indicates that both the fractal scaling index (p=0.0001) and the averaged envelope amplitude (p=0.0001) features are significantly different between the normal and pathological signal groups. Bivariate Gaussian kernels are utilized for modeling the densities of normal and pathological signals in the two-dimensional feature space. Based on the feature densities estimated, the Bayesian decision rule makes better signal classifications than the least-squares support vector machine, with an overall classification accuracy of 88% and an area of 0.957 under the receiver operating characteristic (ROC) curve. Such VAG signal classification results are better than those reported in the state-of-the-art literature. The fluctuation features of VAG signals developed in the present study can provide useful information on the pathological conditions of degenerative knee joints. Classification results demonstrate the effectiveness of the kernel feature density modeling method for computer-aided VAG signal analysis.
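    The two features can be sketched in numpy: a first-order detrended fluctuation analysis (DFA) yields the fractal scaling index as the log-log slope of fluctuation versus window size, and a crude windowed max-min gap stands in for the averaged envelope amplitude. The window sizes and the envelope construction are simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64, 128)):
    """First-order DFA: integrate the signal, detrend each window with a
    linear fit, and return the slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        t = np.arange(n)
        F = []
        for i in range(n_seg):
            seg = y[i*n:(i+1)*n]
            coef = np.polyfit(t, seg, 1)     # local linear trend
            F.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(F)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

def envelope_amplitude(x, w=32):
    """Mean gap between windowed upper and lower extremes, a simple
    stand-in for the paper's averaged envelope amplitude."""
    n = len(x) // w
    seg = x[:n*w].reshape(n, w)
    return np.mean(seg.max(axis=1) - seg.min(axis=1))
```

    For uncorrelated noise DFA gives a scaling index near 0.5, while Brownian-like signals give values near 1.5, which is the kind of separation the feature exploits.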

  7. Goal-directed learning of features and forward models.

    Science.gov (United States)

    Saeb, Sohrab; Weber, Cornelius; Triesch, Jochen

    2009-01-01

    The brain is able to perform actions based on an adequate internal representation of the world, where task-irrelevant features are ignored and incomplete sensory data are estimated. Traditionally, it is assumed that such abstract state representations are obtained purely from the statistics of sensory input, for example by unsupervised learning methods. However, more recent findings suggest an influence of the dopaminergic system, which can be modeled by a reinforcement learning approach. Standard reinforcement learning algorithms act on a single layer network connecting the state space to the action space. Here, we add a feature detection stage and a memory layer, which together construct the state space for a learning agent. The memory layer consists of the state activation at the previous time step as well as the previously chosen action. We present a temporal difference based learning rule for training the weights from these additional inputs to the state layer. As a result, the performance of the network is maintained both in the presence of task-irrelevant features and at randomly occurring time steps during which the input is invisible. Interestingly, a goal-directed forward model emerges from the memory weights, which only covers the state-action pairs that are relevant to the task. The model presents a link between reinforcement learning, feature detection and forward models and may help to explain how reward systems recruit cortical circuits for goal-directed feature detection and prediction.

  8. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration" (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  9. Modeling Lactococcus lactis using a genome-scale flux model

    Directory of Open Access Journals (Sweden)

    Nielsen Jens

    2005-06-01

    Full Text Available Abstract Background Genome-scale flux models are useful tools to represent and analyze microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacterium Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA) were used as modeling frameworks. Results The metabolic network was reconstructed using the annotated genome sequence from L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature was used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homo to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl producing strains were designed. Conclusion The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
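    The flux balance analysis (FBA) framework used here reduces to a linear program: maximize a biomass flux subject to steady-state stoichiometry Sv = 0 and flux bounds. A toy three-reaction sketch (uptake → A → B → biomass, not the 621-reaction L. lactis network) using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass export).
# Rows of S are metabolites (A, B); columns are the three reactions.
S = np.array([[1, -1,  0],    # A: produced by R1, consumed by R2
              [0,  1, -1]])   # B: produced by R2, consumed by R3
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units
c = [0, 0, -1]                # maximize biomass flux v3 (minimize -v3)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
```

    At steady state every intermediate flux must equal the uptake, so the optimum pushes the whole pathway to the uptake bound; real genome-scale models just have thousands of columns in S.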

  10. Ensemble feature selection integrating elitist roles and quantum game model

    Institute of Scientific and Technical Information of China (English)

    Weiping Ding; Jiandong Wang; Zhijin Guan; Quan Shi

    2015-01-01

    To accelerate the selection process of feature subsets in rough set theory (RST), an ensemble elitist roles based quantum game (EERQG) algorithm is proposed for feature selection. Firstly, the multilevel elitist roles based dynamics equilibrium strategy is established, and both immigration and emigration of elitists are able to self-adapt to balance exploration and exploitation in feature selection. Secondly, the utility matrix of trust margins is introduced into the model of multilevel elitist roles to enhance the various elitist roles' performance in searching for optimal feature subsets, and win-win utility solutions for feature selection can be attained. Meanwhile, a novel ensemble quantum game strategy is designed as an intriguing exhibiting structure to perfect the dynamics equilibrium of multilevel elitist roles. Finally, the ensemble manner of multilevel elitist roles is employed to achieve the global minimal feature subset, which will greatly improve feasibility and effectiveness. Experiment results show the proposed EERQG algorithm has superiority compared to existing feature selection algorithms.

  11. Modeling neuron selectivity over simple midlevel features for image classification.

    Science.gov (United States)

    Shu Kong; Zhuolin Jiang; Qiang Yang

    2015-08-01

    We now know that good mid-level features can greatly enhance the performance of image classification, but how to efficiently learn the image features is still an open question. In this paper, we present an efficient unsupervised midlevel feature learning approach (MidFea), which only involves simple operations, such as k-means clustering, convolution, pooling, vector quantization, and random projection. We show that these simple features can also achieve good performance in traditional classification tasks. To further boost the performance, we model the neuron selectivity (NS) principle by building an additional layer over the midlevel features prior to the classifier. The NS-layer learns category-specific neurons in a supervised manner with both bottom-up inference and top-down analysis, and thus supports fast inference for a query image. Through extensive experiments, we demonstrate that this higher level NS-layer notably improves the classification accuracy with our simple MidFea, achieving comparable performances for face recognition, gender classification, age estimation, and object categorization. In particular, our approach runs faster in inference by an order of magnitude than sparse coding-based feature learning methods. As a conclusion, we argue that not only do carefully learned features (MidFea) bring improved performance, but also a sophisticated mechanism (NS-layer) at a higher level boosts the performance further.

  12. A multi-scale method for automatically extracting the dominant features of cervical vertebrae in CT images

    Directory of Open Access Journals (Sweden)

    Tung-Ying Wu

    2013-07-01

    Full Text Available Localization of the dominant points of cervical spines in medical images is important for improving medical automation in clinical head and neck applications. In order to automatically identify the dominant points of cervical vertebrae in neck CT images with precision, we propose a method based on multi-scale contour analysis to analyze the deformable shape of spines. To extract the spine contour, we introduce a method to automatically generate the initial contour of the spine shape, from which the distance field for level set active contour iterations can also be deduced. In the shape analysis stage, we first coarsely segment the extracted contour at zero-crossing points of the curvature, modeling the spine shape with curvature scale space analysis. Then, each segmented curve is analyzed geometrically based on the turning angle property at different scales, and the local extreme points are extracted and verified as the dominant feature points. The vertices of the shape contour are derived approximately at coarse scale and then adjusted precisely at fine scale. The results of our experiments show that we achieve a success rate of 93.4% and an accuracy of 0.37 mm compared with manual results.
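    The turning-angle test used to pick dominant points can be sketched on a polygonal contour as follows; the threshold and the synthetic contour in the example are illustrative choices, not the paper's tuned parameters or its full coarse-to-fine scheme:

```python
import numpy as np

def turning_angles(contour):
    """Absolute exterior turning angle at each vertex of a closed
    polygonal contour given as (x, y) points."""
    pts = np.asarray(contour, float)
    v1 = pts - np.roll(pts, 1, axis=0)       # incoming edge vectors
    v2 = np.roll(pts, -1, axis=0) - pts      # outgoing edge vectors
    a1 = np.arctan2(v1[:, 1], v1[:, 0])
    a2 = np.arctan2(v2[:, 1], v2[:, 0])
    # wrap the angle difference into (-pi, pi] before taking |.|
    return np.abs((a2 - a1 + np.pi) % (2 * np.pi) - np.pi)

def dominant_points(contour, thresh=np.pi / 6):
    """Indices of vertices whose turning angle exceeds the threshold,
    i.e. candidate dominant feature points at one scale."""
    return np.flatnonzero(turning_angles(contour) > thresh)
```

    In a multi-scale version, the contour is resampled (or smoothed) at several scales and the surviving high-turning-angle vertices are verified across scales before being accepted.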

  13. A Feature Model Based Framework for Refactoring Software Product Line Architecture

    Institute of Scientific and Technical Information of China (English)

    Mohammad Tanhaei; Jafar Habibi

    2016-01-01

    Software product line (SPL) is an approach used to develop a range of software products with a high degree of similarity. In this approach, a feature model is usually used to keep track of similarities and differences. Over time, as modifications are made to the SPL, inconsistencies with the feature model could arise. The first approach to dealing with these inconsistencies is refactoring. Refactoring consists of small steps which, when accumulated, may lead to large-scale changes in the SPL, resulting in features being added to or eliminated from the SPL. In this paper, we propose a framework for refactoring SPLs, which helps keep SPLs consistent with the feature model. After some introductory remarks, we describe a formal model for representing the feature model. We express various refactoring patterns applicable to the feature model and the SPL formally, and then introduce an algorithm for finding them in the SPL. In the end, we use a real-world case study of an SPL to illustrate the applicability of the framework introduced in the paper.
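    A feature model of the kind the framework formalizes can be sketched as a parent/mandatory table plus cross-tree constraints; a refactoring step is then consistency-checked by re-validating the configurations it touches. The feature names and the constraint encoding below are hypothetical:

```python
# Hypothetical feature model: feature -> (parent, mandatory?), plus
# cross-tree (requires, excludes) constraints.
FEATURES = {
    'SPL':      (None,     True),
    'Engine':   ('SPL',    True),
    'Petrol':   ('Engine', False),
    'Electric': ('Engine', False),
    'Charger':  ('SPL',    False),
}
REQUIRES = [('Electric', 'Charger')]
EXCLUDES = [('Petrol', 'Electric')]

def is_valid(config):
    """Check one product configuration against the feature model."""
    config = set(config)
    for f in config:                       # every feature needs its parent
        parent = FEATURES[f][0]
        if parent is not None and parent not in config:
            return False
    for f, (parent, mandatory) in FEATURES.items():
        # mandatory children must be present whenever their parent is
        if mandatory and (parent is None or parent in config) \
                and f not in config:
            return False
    for a, b in REQUIRES:
        if a in config and b not in config:
            return False
    for a, b in EXCLUDES:
        if a in config and b in config:
            return False
    return True
```

    A refactoring pattern that, say, moves Charger under Electric would be applied to this table, after which every existing product is re-checked with is_valid to detect the inconsistencies the paper targets.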

  14. Solving Topological and Geometrical Constraints in Bridge Feature Model

    Institute of Scientific and Technical Information of China (English)

    PENG Weibing; SONG Liangliang; PAN Guoshuai

    2008-01-01

    The capacity of computers to solve more complex design problems has gradually increased. Bridge design needs a breakthrough beyond its current limitations to become more intelligent and integrated. This paper proposes a new parametric, feature-based computer-aided design (CAD) model that can represent families of bridge objects and includes knowledge representation and three-dimensional geometric topology relationships. The realization of a family member is found by solving first the geometric constraints and then the topological constraints. From the geometric solution, constraint equations are constructed. The topological solution is developed from a feature-dependency graph between bridge objects. Finally, feature parameters are proposed to drive the bridge design. Results from our implementation show that the method can help to facilitate bridge design.
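Solving the topological constraints through a feature-dependency graph amounts to evaluating features in dependency order, i.e. a topological sort. A minimal sketch, with hypothetical bridge-feature names (the graph structure and names are illustrative, not taken from the paper):

```python
from collections import deque

def topological_order(deps):
    """Evaluation order for a feature-dependency graph: each feature is
    computed only after every feature it depends on. `deps` maps a feature
    name to the set of features it depends on. Raises on cyclic dependencies."""
    indeg = {f: len(d) for f, d in deps.items()}
    children = {f: [] for f in deps}
    for f, d in deps.items():
        for parent in d:
            children[parent].append(f)
    ready = deque(f for f, k in indeg.items() if k == 0)
    order = []
    while ready:
        f = ready.popleft()
        order.append(f)
        for c in children[f]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(deps):
        raise ValueError("cyclic feature dependencies")
    return order
```

For example, with a deck depending on piers and girders, and girders depending on piers, the piers are resolved first and the deck last.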

  15. Diagnosis of cirrus cloud occurrence using large-scale analysis data and a cloud-scale model

    Directory of Open Access Journals (Sweden)

    G. Cautenet

    Full Text Available The development of cirrus clouds is governed by large-scale synoptic movements such as updraft regions in convergence zones, but also by smaller scale features, for instance microphysical phenomena, entrainment, small-scale turbulence and radiative field, fall-out of the ice phase or wind shear. For this reason, the proper handling of cirrus life cycles is not an easy task using a large-scale model alone. We present some results from a small-scale cirrus cloud model initialized by ECMWF first-guess data, which prove more convenient for this task than the analyzed ones. This model is Starr's 2-D cirrus cloud model, where the rate of ice production/destruction is parametrized from environmental data. Comparison with satellite and local observations during the ICE89 experiment (North Sea) shows that such an efficient model using large-scale data as input provides a reasonable diagnosis of cirrus occurrence in a given meteorological field. The main driving features are the updraft provided by the large-scale model, which enhances or inhibits the cloud development according to its sign, and the water vapour availability. The cloud fields retrieved are compared to satellite imagery. Finally, the use of a small-scale model in large-scale numerical studies is examined.

  16. Gray-scale and color doppler US features corresponding to histological subtypes of papillary thyroid carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Kwon; Kwon, Sun Young; Woo, Seong Ku [Dongsan Medical Center, Keimyung University College of Medicine, Daegu (Korea, Republic of)]

    2007-01-15

    To compare the gray-scale and color or power Doppler ultrasonographic (US) features according to the histological subtypes of a papillary thyroid carcinoma (PTC). The gray-scale and color or power Doppler US features of 159 surgically confirmed PTCs (classic type of PTC, 69; classic type of papillary microcarcinoma [PMC], 67; and follicular variant of PTC [FVPTC], 23) in 118 patients were analyzed retrospectively. The following US characteristics were evaluated: the type of vascularization, echogenicity, outline, ratio of anteroposterior/transverse (AP/T) diameters, as well as the presence or absence of halo signs, cystic changes, and microcalcification. The most common type of vascularization was penetrating or central (75.4%) for the classic type of PTC, avascular (56.7%) for PMC, and peripheral and central (82.6%) for FVPTC. The echogenicity was most commonly hypoechoic (47.8%) for the classic type, hypoechoic (74.6%) for PMC, and isoechoic (30.4%) for FVPTC. The outline was most often irregular (60.9%) for the classic type, irregular (86.6%) for PMC, and regular (91.3%) for FVPTC. The ratio of the AP/T diameters was 1.0 or more in 31.9%, 55.2%, and 13.0%, a halo sign was observed in 30.4%, 6.0%, and 78.3%, cystic changes were present in 1.4%, 0%, and 21.7%, and microcalcifications were present in 55.1%, 28.4%, and 13.0% of those with the classic type, PMC and FVPTC, respectively. The gray-scale and color Doppler US features corresponding to the histological subtypes of PTC are significantly different from one another. The US features of FVPTC appear to be significantly different from the other subtypes in that they tend to have more benign US characteristics than those of the classic type or PMC.

  17. FINAL REPORT: Mechanistically-Based Field Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Brian D.

    2013-11-04

    Biogeochemical reactive transport processes in the subsurface environment are important to many contemporary environmental issues of significance to DOE. Quantification of risks and impacts associated with environmental management options, and design of remediation systems where needed, require that we have at our disposal reliable predictive tools (usually in the form of numerical simulation models). However, it is well known that even the most sophisticated reactive transport models available today have poor predictive power, particularly when applied at the field scale. Although the lack of predictive ability is associated in part with our inability to characterize the subsurface and limitations in computational power, significant advances have been made in both of these areas in recent decades and can be expected to continue. In this research, we examined the upscaling (pore to Darcy and Darcy to field) of the problem of bioremediation via biofilms in porous media. The principal idea was to start with a conceptual description of the bioremediation process at the pore scale, and apply upscaling methods to formally develop the appropriate upscaled model at the so-called Darcy scale. The purpose was to determine (1) what forms the upscaled models would take, and (2) how one might parameterize such upscaled models for applications to bioremediation in the field. We were able to effectively upscale the bioremediation process to explain how the pore-scale phenomena were linked to the field scale. The end product of this research was to produce a set of upscaled models that could be used to help predict field-scale bioremediation. These models were mechanistic, in the sense that they directly incorporated pore-scale information, but upscaled so that only the essential features of the process were needed to predict the effective parameters that appear in the model. In this way, a direct link between the microscale and the field scale was made, but the upscaling process

  18. Weyl's Scale Invariance And The Standard Model

    CERN Document Server

    Gold, B S

    2005-01-01

    This paper is an extension of the work by Dr. Subhash Rajpoot, Ph.D. and Dr. Hitoshi Nishino, Ph.D. I introduce Weyl's scale invariance as an additional local symmetry in the standard model of electroweak interactions. An inevitable consequence is the introduction of general relativity coupled to scalar fields a la Dirac and an additional vector particle called the Weylon. This paper shows that once Weyl's scale invariance is broken, the phenomenon (a) generates Newton's gravitational constant GN and (b) triggers spontaneous symmetry breaking in the normal manner resulting in masses for the conventional fermions and bosons. The scale at which Weyl's scale symmetry breaks is of order Planck mass. If right-handed neutrinos are also introduced, their absence at present energy scales is attributed to their mass which is tied to the scale where scale invariance breaks.

  19. Bayesian latent feature modeling for modeling bipartite networks with overlapping groups

    DEFF Research Database (Denmark)

    Jørgensen, Philip H.; Mørup, Morten; Schmidt, Mikkel Nørgaard;

    2016-01-01

    Bi-partite networks are commonly modelled using latent class or latent feature models. Whereas the existing latent class models admit marginalization of parameters specifying the strength of interaction between groups, existing latent feature models do not admit analytical marginalization...... of the parameters accounting for the interaction strength within the feature representation. We propose a new binary latent feature model that admits analytical marginalization of interaction strengths such that model inference reduces to assigning nodes to latent features. We propose a constraint inspired...... to the infinite relational model and the infinite Bernoulli mixture model. We find that the model provides a new latent feature representation of structure while in link-prediction performing close to existing models. Our current extension of the notion of communities and collapsed inference to binary latent...

  20. Riparian erosion vulnerability model based on environmental features.

    Science.gov (United States)

    Botero-Acosta, Alejandra; Chu, Maria L; Guzman, Jorge A; Starks, Patrick J; Moriasi, Daniel N

    2017-12-01

    Riparian erosion is one of the major causes of sediment and contaminant load to streams, degradation of riparian wildlife habitats, and land loss hazards. Land and soil management practices are implemented as conservation and restoration measures to mitigate the environmental problems brought about by riparian erosion. This, however, requires the identification of areas vulnerable to soil erosion. Because of the complex interactions between the different mechanisms that govern soil erosion and the inherent uncertainties involved in quantifying these processes, assessing erosion vulnerability at the watershed scale is challenging. The main objective of this study was to develop a methodology to identify areas along the riparian zone that are susceptible to erosion. The methodology was developed by integrating the physically-based watershed model MIKE-SHE, to simulate water movement, and a habitat suitability model, MaxEnt, to quantify the probability of presences of elevation changes (i.e., erosion) across the watershed. The presences of elevation changes were estimated based on two LiDAR-based elevation datasets taken in 2009 and 2012. The changes in elevation were grouped into four categories: low (0.5 - 0.7 m), medium (0.7 - 1.0 m), high (1.0 - 1.7 m) and very high (1.7 - 5.9 m), considering each category as a studied "species". The categories' locations were then used as "species location" map in MaxEnt. The environmental features used as constraints to the presence of erosion were land cover, soil, stream power index, overland flow, lateral inflow, and discharge. The modeling framework was evaluated in the Fort Cobb Reservoir Experimental watershed in southcentral Oklahoma. Results showed that the most vulnerable areas for erosion were located at the upper riparian zones of the Cobb and Lake sub-watersheds. The main waterways of these sub-watersheds were also found to be prone to streambank erosion. Approximately 80% of the riparian zone (streambank

  1. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments...... difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training...... the complexity of hand-crafting feature extractors that combine information across dissimilar modalities of input. Frequent sequence mining is presented as a method to learn feature extractors that fuse physiological and contextual information. This method is evaluated in a game-based dataset and compared...

  2. High-pT Features of z-Scaling at RHIC and Tevatron

    Science.gov (United States)

    Tokarev, M. V.; Zborovsky, I.; Dedovich, T. G.

    2008-09-01

    Experimental data on inclusive cross sections of jet, direct photon, and high-pT hadron production in pp/p̄p and AA collisions are analyzed in the framework of z-scaling. The analysis is performed with data obtained at ISR, Sp̄pS, RHIC, and Tevatron. Scaling properties of z-presentation of the inclusive spectra are verified. Physical interpretation of the variable z and the scaling function ψ(z) is discussed. We argue that general principles of self-similarity, locality, and fractality reflect the structure of the colliding objects, interaction of their constituents, and particle formation at small scales. The obtained results suggest that the z-scaling may be used as a tool for searching for new physics phenomena beyond the Standard Model in hadron and nucleus collisions at high transverse momentum and high multiplicity at U70, RHIC, Tevatron, and LHC.

  3. Extracting scaling laws from numerical dynamo models

    CERN Document Server

    Stelzer, Z

    2013-01-01

    Earth's magnetic field is generated by processes in the electrically conducting, liquid outer core, subsumed under the term `geodynamo'. In the last decades, great effort has been put into the numerical simulation of core dynamics following from the magnetohydrodynamic (MHD) equations. However, the numerical simulations are far from Earth's core in terms of several control parameters. Different scaling analyses found simple scaling laws for quantities like heat transport, flow velocity, magnetic field strength and magnetic dissipation time. We use an extensive dataset of 116 numerical dynamo models compiled by Christensen and co-workers to analyse these scalings from a rigorous model selection point of view. Our method of choice is leave-one-out cross-validation which rates models according to their predictive abilities. In contrast to earlier results, we find that diffusive processes are not negligible for the flow velocity and magnetic field strength in the numerical dynamos. Also the scaling of the magneti...
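Rating candidate scaling laws by leave-one-out cross-validation, as described above, can be illustrated on a single power-law candidate. Fitting in log space by least squares is an assumption made here for illustration, not a description of the study's actual procedure:

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of log y = a + b log x, i.e. the power law y = e^a * x^b."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    return my - b * mx, b

def loocv_error(xs, ys):
    """Leave-one-out cross-validation: refit on all-but-one point and
    accumulate the squared prediction error in log space. A lower score
    means better predictive ability, the criterion used for model selection."""
    err = 0.0
    for i in range(len(xs)):
        a, b = fit_power_law(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        err += (math.log(ys[i]) - (a + b * math.log(xs[i]))) ** 2
    return err / len(xs)
```

Competing scaling laws (e.g. with and without diffusivity-dependent terms) would each get a cross-validation score, and the lowest score wins.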

  4. 3D facial geometric features for constrained local model

    NARCIS (Netherlands)

    Cheng, Shiyang; Zafeiriou, Stefanos; Asthana, Akshay; Pantic, Maja

    2014-01-01

    We propose a 3D Constrained Local Model framework for deformable face alignment in depth image. Our framework exploits the intrinsic 3D geometric information in depth data by utilizing robust histogram-based 3D geometric features that are based on normal vectors. In addition, we demonstrate the fusi

  5. Observational evidence for various models of Moving Magnetic Features

    Science.gov (United States)

    Lee, Jeongwoo W.

    1992-01-01

    New measurements of Moving Magnetic Features (MMFs) based on the observations of the active region NOAA 5612 made at Big Bear Solar Observatory (BBSO) on August 2, 1989 are presented. The existing theoretical models are checked against the new observations, and the origin of MMFs conjectured from the deduced observational constraints is discussed.

  6. Automatic computational models of acoustical category features: Talking versus singing

    Science.gov (United States)

    Gerhard, David

    2003-10-01

    The automatic discrimination between acoustical categories has been an increasingly interesting problem in the fields of computer listening, multimedia databases, and music information retrieval. A system is presented which automatically generates classification models, given a set of destination classes and a set of a priori labeled acoustic events. Computational models are created using comparative probability density estimations. For the specific example presented, the destination classes are talking and singing. Individual feature models are evaluated using two measures: The Kolmogorov-Smirnov distance measures feature separation, and accuracy is measured using absolute and relative metrics. The system automatically segments the event set into a user-defined number (n) of development subsets, and runs a development cycle for each set, generating n separate systems, each of which is evaluated using the above metrics to improve overall system accuracy and to reduce inherent data skew from any one development subset. Multiple features for the same acoustical categories are then compared for underlying feature overlap using cross-correlation. Advantages of automated computational models include improved system development and testing, shortened development cycle, and automation of common system evaluation tasks. Numerical results are presented relating to the talking/singing classification problem.
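The Kolmogorov-Smirnov distance used above as a feature-separation measure is simply the maximum gap between the two classes' empirical CDFs. A minimal sketch in plain Python (an illustration, not the paper's implementation):

```python
import bisect

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance between feature-value samples
    a and b: values near 1 mean the feature separates the two classes well,
    values near 0 mean the distributions overlap heavily."""
    a, b = sorted(a), sorted(b)

    def ecdf(s, x):
        # fraction of the sorted sample s that is <= x
        return bisect.bisect_right(s, x) / len(s)

    # the maximum CDF gap is attained at one of the observed sample values
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))
```

Fully separated samples score 1.0, identical samples score 0.0, and partial overlap falls in between.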

  7. A model for explaining some features of shuttle glow

    Science.gov (United States)

    Peters, P. N.

    1985-01-01

    A solid state model is proposed which hopefully removes some of the objections to excited atoms being sources for light emanating from surfaces. Glow features are discussed in terms of excited oxygen atoms impinging on the surface, although other species could be treated similarly. Band formation, excited-lifetime shortening, and glow color are discussed in terms of this model. The model's inability to explain glow emanating above surfaces indicates a necessity for other mechanisms to satisfy this requirement. Several ways of testing the model are described.

  8. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Full Text Available Most multi-scale segmentation algorithms are not designed for high-resolution remote sensing images and have difficulty communicating and using information across layers. In view of this, we propose a method for multi-scale segmentation of high-resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and a band-weighted distance function is built to obtain the edge weights. According to this criterion, the initial segmentation objects of color images are obtained by the Kruskal minimum spanning tree algorithm. Finally, segmented images are obtained by an adaptive Mumford–Shah region-merging rule combined with spectral and texture information. The proposed method is evaluated using analog images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that it outperforms the eCognition fractal net evolution algorithm (FNEA) in accuracy and is slightly inferior to FNEA in efficiency.
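The Kruskal-style step for forming initial segmentation objects can be sketched with a union-find over a 4-connected pixel graph. A fixed merge threshold on intensity difference stands in for the band-weighted distance criterion, purely for illustration:

```python
class DisjointSet:
    """Union-find with path compression, used to merge pixel regions."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[rj] = ri

def segment(image, threshold):
    """Kruskal-style merging: sort 4-connected edges by intensity difference
    and merge regions while the edge weight stays within the threshold.
    Returns one region label per pixel (row-major)."""
    h, w = len(image), len(image[0])
    idx = lambda r, c: r * w + c
    edges = []
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                edges.append((abs(image[r][c] - image[r][c + 1]), idx(r, c), idx(r, c + 1)))
            if r + 1 < h:
                edges.append((abs(image[r][c] - image[r + 1][c]), idx(r, c), idx(r + 1, c)))
    ds = DisjointSet(h * w)
    for wgt, i, j in sorted(edges):  # ascending weight, as in Kruskal's MST
        if wgt <= threshold:
            ds.union(i, j)
    return [ds.find(idx(r, c)) for r in range(h) for c in range(w)]
```

On a tiny two-tone image this produces exactly two initial objects, which a later merging stage (Mumford–Shah-style, with spectral and texture cues) could refine further.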

  9. Digital Library Image Retrieval Using Scale Invariant Feature and Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hongtao Zhang

    2014-10-01

    Full Text Available With the advance of digital libraries, digital content has developed rich information connotations. Traditional information retrieval methods based on external characteristics and text descriptions are unable to sufficiently reveal and express the substance and semantic relations of multimedia information, and unable to fully describe its representative characteristics. Because of the rich connotations of image content and people's subjectivity in interpreting it, the visual features of an image are difficult to describe with keywords. Therefore, such methods cannot always meet users' needs, and the study of content-based digital library image retrieval is important to both academic research and application. At present, image retrieval methods are mainly based on text and content, but the existing algorithms have shortcomings, such as large errors and slow speeds. Motivated by the above, we propose in this paper a new approach based on the relevance vector machine (RVM). The proposed approach first extracts patch-level scale-invariant image features (SIFT, and then constructs global features for the images. The image features are then delivered into the RVM for retrieval. We evaluate the proposed approach on the Corel dataset. Experimental results show that the proposed method achieves high retrieval accuracy.

  10. Seamless cross-scale modeling with SCHISM

    Science.gov (United States)

    Zhang, Yinglong J.; Ye, Fei; Stanev, Emil V.; Grashorn, Sebastian

    2016-06-01

    We present a new 3D unstructured-grid model (SCHISM) which is an upgrade from an existing model (SELFE). The new advection scheme for the momentum equation includes an iterative smoother to reduce excess mass produced by higher-order kriging method, and a new viscosity formulation is shown to work robustly for generic unstructured grids and effectively filter out spurious modes without introducing excessive dissipation. A new higher-order implicit advection scheme for transport (TVD2) is proposed to effectively handle a wide range of Courant numbers as commonly found in typical cross-scale applications. The addition of quadrangular elements into the model, together with a recently proposed, highly flexible vertical grid system (Zhang et al., A new vertical coordinate system for a 3D unstructured-grid model. Ocean Model. 85, 2015), leads to model polymorphism that unifies 1D/2DH/2DV/3D cells in a single model grid. Results from several test cases demonstrate the model's good performance in the eddying regime, which presents greater challenges for unstructured-grid models and represents the last missing link for our cross-scale model. The model can thus be used to simulate cross-scale processes in a seamless fashion (i.e. from deep ocean into shallow depths).
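The flavour of a TVD advection scheme can be conveyed with a one-dimensional, explicit, minmod-limited MUSCL update for linear advection. This is a textbook scheme chosen for illustration only; SCHISM's TVD2 is an implicit, unstructured-grid transport scheme and is not reproduced here:

```python
def minmod(a, b):
    """Slope limiter: zero at extrema/sign changes, else the smaller slope."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_tvd(u, c, steps):
    """1-D linear advection u_t + a u_x = 0 (a > 0) on a periodic domain,
    explicit MUSCL scheme with a minmod limiter. c = a*dt/dx is the Courant
    number; the scheme is stable and TVD for 0 < c <= 1."""
    n = len(u)
    u = list(u)
    for _ in range(steps):
        # limited slope in each cell
        s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
        # upwind face value at i+1/2 with a half-step correction
        f = [u[i] + 0.5 * (1 - c) * s[i] for i in range(n)]
        # conservative (flux-form) update, periodic wrap via f[-1]
        u = [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]
    return u
```

The flux form conserves the total tracer exactly, and the limiter prevents the over- and undershoots that an unlimited second-order scheme would produce at a sharp front, which is the essential TVD property.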

  11. Detection of Small-Scaled Features Using Landsat and Sentinel-2 Data Sets

    Science.gov (United States)

    Steensen, Torge; Muller, Sonke; Dresen, Boris; Buscher, Olaf

    2016-08-01

    As renewable energies advance, our attention has to turn to secondary features that can be utilised to enhance our independence from fossil fuels. In terms of biomass, this focus lies on small-scaled features like vegetation units alongside roads or hedges between agricultural fields. Currently, there is no easily accessible inventory, if any, outlining the growth and re-growth patterns of such vegetation. Since these features are trimmed at least annually to allow the passing of traffic, the cuttings can, in principle, be harvested and converted into energy. This, however, requires a map outlining vegetation growth and the potential energy yield at different locations, as well as adequate transport routes and potential processing-plant locations. With the help of Landsat and Sentinel-2 data sets, we explore the possibilities of creating such a map. Additional data are provided in the form of regularly acquired airborne orthophotos and GIS-based infrastructure data.

  12. Real-time object tracking based on scale-invariant features employing bio-inspired hardware.

    Science.gov (United States)

    Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya

    2016-09-01

    We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video.

  13. Feature Solution in the Process of Parameterizing Port Model

    Institute of Scientific and Technical Information of China (English)

    彭禹; 郝志勇; 孙秀永; 刘东航; 付鲁华

    2004-01-01

    Aimed at attaining an integrated and effective pattern to guide the port design process, this paper puts forward a new conception of feature solution, based on parameterized feature modeling. With this solution, the overall port pre-design process can be conducted in a virtual pattern. Moreover, to evaluate the advantages of the new design pattern, an application to a port system is presented, in which a computational fluid dynamics analysis is involved. An ideal effect of cleanness, high efficiency, and high precision has been achieved.

  14. Site-Scale Saturated Zone Flow Model

    Energy Technology Data Exchange (ETDEWEB)

    G. Zyvoloski

    2003-12-17

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure (AP)-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County wells lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca

  15. Feature-based Analysis of Large-scale Spatio-Temporal Sensor Data on Hybrid Architectures.

    Science.gov (United States)

    Saltz, Joel; Teodoro, George; Pan, Tony; Cooper, Lee; Kong, Jun; Klasky, Scott; Kurc, Tahsin

    2013-08-01

    Analysis of large sensor datasets for structural and functional features has applications in many domains, including weather and climate modeling, characterization of subsurface reservoirs, and biomedicine. The vast amount of data obtained from state-of-the-art sensors and the computational cost of analysis operations create a barrier to such analyses. In this paper, we describe middleware system support to take advantage of large clusters of hybrid CPU-GPU nodes to address the data- and compute-intensive requirements of feature-based analyses in large spatio-temporal datasets.

  16. Collecting Inexpensive High Resolution Aerial and Stereo Images of Small- to Mid-Scale Geomorphic and Tectonic Features

    Science.gov (United States)

    Wheelwright, R. J.; White, W. S.; Willis, J. B.

    2010-12-01

    Methods for collecting accurate, mm- to cm-scale stereoscopic aerial imagery of both small- and mid-scale geomorphic features are developed for a one-time cost of under $1500. High resolution aerial images are valuable for documenting and analyzing small- to mid-scale geomorphic and tectonic features. However, collecting images of mid-scale features such as landslides, rock glaciers, fault scarps, and cinder cones is expensive and makes studies that rely on high resolution repeat imagery prohibitive for undergraduate geology departments with limited budgets. In addition to cost, collecting images of smaller scale geomorphic features such as gravel bars is often impeded by overhanging vegetation or other features in the immediate environment that make impractical the collection of aerial images using standard airborne techniques. The methods provide high resolution stereo photos suitable for image processing and stereographic analysis; the images are potentially suitable for change analyses, velocity tracking, and construction of lidar-resolution digital elevation models. We developed two techniques. The technique suitable for small-scale features (such as gravel bars) utilizes two Nikon D3000 digital single-lens reflex (DSLR) cameras attached to a system of poles that suspends the cameras at a height of 4 meters with a variable camera separation of 0.6 to 0.9 m. The poles are oriented such that they do not appear in the photographs. The cameras are simultaneously remotely activated to collect stereo pairs at a resolution of 64 pixels/cm2 (pixel length is 1.2 mm). Ground control on the images is provided by pegs placed 5 meters apart, GPS positioning, and a meter-stick included in each photograph. Initial photo data gathered of a gravel bar on the Henry’s Fork of the Snake River, north of Rexburg, Idaho is sharp and readily segmented using the MatLab-based CLASTS image processing algorithm. The technique developed for imaging mid-scale features (such as cinder

  17. Powerline Communications Channel Modelling Methodology Based on Statistical Features

    CERN Document Server

    Tan, Bo

    2012-01-01

    This paper proposes a new channel modelling method for powerline communications networks based on the multipath profile in the time domain. The new channel model is developed to be applied in a range of Powerline Communications (PLC) research topics such as impulse noise modelling, deployment and coverage studies, and communications theory analysis. To develop the methodology, channels are categorised according to their propagation distance and power delay profile. The statistical multipath parameters such as path arrival time, magnitude and interval for each category are analyzed to build the model. Each channel generated from the proposed statistical model represents a different realisation of a PLC network. Simulation results in the time and frequency domains show that the proposed statistical modelling method, which integrates the impact of network topology, reproduces the PLC channel features of the underlying transmission-line-theory model. Furthermore, two potential application scenarios are d...
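A statistical multipath channel generator of the general kind described can be sketched as follows. The exponential inter-arrival times, the decaying magnitudes, and the uniform phases are modelling assumptions made here for illustration; they are not the distributions fitted in the paper:

```python
import cmath
import math
import random

def sample_channel(n_paths, mean_interval, decay, rng):
    """Draw one multipath realisation: each path gets a delay (exponential
    inter-arrival times), a magnitude that decays with delay, and a random
    phase. Returns a list of (delay, magnitude, phase) tuples."""
    t = 0.0
    paths = []
    for _ in range(n_paths):
        t += rng.expovariate(1.0 / mean_interval)      # next arrival time
        mag = math.exp(-decay * t) * rng.uniform(0.5, 1.0)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        paths.append((t, mag, phase))
    return paths

def frequency_response(paths, freqs):
    """H(f) = sum_k g_k * exp(j*(phi_k - 2*pi*f*tau_k)) over the sampled paths."""
    return [sum(m * cmath.exp(1j * (p - 2.0 * math.pi * f * t)) for t, m, p in paths)
            for f in freqs]
```

Each call with a fresh random state yields a different realisation of a PLC network, which is the role the statistical model plays in coverage and noise studies.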

  18. Large-Scale Circulation Features Typical of Wintertime Extensive and Persistent Low Temperature Events in China

    Institute of Scientific and Technical Information of China (English)

    BUEH Cholaw; FU Xian-Yue; XIE Zuo-Wei

    2011-01-01

    A pair of northeast-southwest-tilted midtropospheric ridges and troughs on the continental scale was observed to be the key circulation feature common among wintertime extensive and persistent low temperature events (EPLTE) in China. During the persistence of such anomalous circulations, the split flow over the inner Asian continent and the influent flow over the southeastern coast of China correspond well to the expanded and amplified Siberian high with tightened sea level pressure gradients and hence, a strong, cold advection over southeastern China. The western Pacific subtropical high tends to expand northward during the early stages of most EPLTEs.

  19. Moist multi-scale models for the hurricane embryo

    Energy Technology Data Exchange (ETDEWEB)

    Majda, Andrew J. [New York University]; Xing, Yulong [ORNL]; Mohammadian, Majid [University of Ottawa, Canada]

    2010-01-01

    Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.

  20. Diffusion through thin membranes: Modeling across scales

    Science.gov (United States)

    Aho, Vesa; Mattila, Keijo; Kühn, Thomas; Kekäläinen, Pekka; Pulkkinen, Otto; Minussi, Roberta Brondani; Vihinen-Ranta, Maija; Timonen, Jussi

    2016-04-01

    From macroscopic to microscopic scales it is demonstrated that diffusion through membranes can be modeled using specific boundary conditions across them. The membranes are here considered thin in comparison to the overall size of the system. In a macroscopic scale the membrane is introduced as a transmission boundary condition, which enables an effective modeling of systems that involve multiple scales. In a mesoscopic scale, a numerical lattice-Boltzmann scheme with a partial-bounceback condition at the membrane is proposed and analyzed. It is shown that this mesoscopic approach provides a consistent approximation of the transmission boundary condition. Furthermore, analysis of the mesoscopic scheme gives rise to an expression for the permeability of a thin membrane as a function of a mesoscopic transmission parameter. In a microscopic model, the mean waiting time for a passage of a particle through the membrane is in accordance with this permeability. Numerical results computed with the mesoscopic scheme are then compared successfully with analytical solutions derived in a macroscopic scale, and the membrane model introduced here is used to simulate diffusive transport between the cell nucleus and cytoplasm through the nuclear envelope in a realistic cell model based on fluorescence microscopy data. By comparing the simulated fluorophore transport to the experimental one, we determine the permeability of the nuclear envelope of HeLa cells to enhanced yellow fluorescent protein.
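The microscopic picture in this abstract, where the mean waiting time for passage through a partially transmitting membrane encodes its permeability, can be illustrated with a toy Monte Carlo. The geometric "succeed with probability p per attempt" model is an assumption for illustration, not the paper's lattice-Boltzmann partial-bounceback scheme:

```python
import random

def mean_crossing_time(p_transmit, trials=20000, seed=1):
    """Monte Carlo estimate of the mean waiting time (in attempts) for a
    particle to pass a membrane that transmits with probability p per
    attempt. The waiting time is geometric, so the mean tends to 1/p;
    a lower permeability (smaller p) means a longer mean waiting time."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        attempts = 1
        while rng.random() >= p_transmit:
            attempts += 1
        total += attempts
    return total / trials
```

For `p_transmit=0.25` the estimate converges toward 4 attempts, the 1/p relation that links the mesoscopic transmission parameter to an effective permeability.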

  1. Gravity Model for Topological Features on a Cylindrical Manifold

    Directory of Open Access Journals (Sweden)

    Bayak I.

    2008-04-01

    Full Text Available A model aimed at understanding quantum gravity in terms of Birkhoff's approach is discussed. The geometry of this model is constructed by using a winding map of Minkowski space into a $R^3 \times S^1$-cylinder. The basic field of this model is a field of unit vectors defined through the velocity field of a flow wrapping the cylinder. The degeneration of some parts of the flow into circles (topological features) results in inhomogeneities and gives rise to a scalar field, analogous to the gravitational field. The geometry and dynamics of this field are briefly discussed. We treat the intersections between the topological features and the observer's 3-space as matter particles and argue that these entities are likely to possess some quantum properties.

  3. Modelling the scaling properties of human mobility

    Science.gov (United States)

    Song, Chaoming; Koren, Tal; Wang, Pu; Barabási, Albert-László

    2010-10-01

    Individual human trajectories are characterized by fat-tailed distributions of jump sizes and waiting times, suggesting the relevance of continuous-time random-walk (CTRW) models for human mobility. However, human traces are barely random. Given the importance of human mobility, from epidemic modelling to traffic prediction and urban planning, we need quantitative models that can account for the statistical characteristics of individual human trajectories. Here we use empirical data on human mobility, captured by mobile-phone traces, to show that the predictions of the CTRW models are in systematic conflict with the empirical results. We introduce two principles that govern human trajectories, allowing us to build a statistically self-consistent microscopic model for individual human mobility. The model accounts for the empirically observed scaling laws, but also allows us to analytically predict most of the pertinent scaling exponents.
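The CTRW baseline that this abstract argues against can be sketched as a sampler with fat-tailed (Pareto) jump sizes and waiting times; the tail exponents below are placeholders, not the empirically measured mobility exponents:

```python
import random

def ctrw_trajectory(steps, alpha=1.6, beta=0.8, seed=2):
    """Sample a 1-D continuous-time random walk: Pareto-distributed
    waiting times (exponent beta) and jump lengths (exponent alpha),
    with a random jump direction. Returns a list of (time, position)."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    traj = [(t, x)]
    for _ in range(steps):
        t += rng.paretovariate(beta)           # heavy-tailed waiting time
        jump = rng.paretovariate(alpha)        # heavy-tailed jump length
        x += jump if rng.random() < 0.5 else -jump
        traj.append((t, x))
    return traj
```

The paper's point is that real traces deviate systematically from trajectories like these, motivating the self-consistent microscopic model.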

  4. Sub-Grid Scale Plume Modeling

    Directory of Open Access Journals (Sweden)

    Greg Yarwood

    2011-08-01

    Full Text Available Multi-pollutant chemical transport models (CTMs) are being routinely used to predict the impacts of emission controls on the concentrations and deposition of primary and secondary pollutants. While these models have a fairly comprehensive treatment of the governing atmospheric processes, they are unable to correctly represent processes that occur at very fine scales, such as the near-source transport and chemistry of emissions from elevated point sources, because of their relatively coarse horizontal resolution. Several different approaches have been used to address this limitation, such as using fine grids, adaptive grids, hybrid modeling, or an embedded sub-grid scale plume model, i.e., plume-in-grid (PinG) modeling. In this paper, we first discuss the relative merits of these various approaches used to resolve sub-grid scale effects in grid models, and then focus on PinG modeling, which has been very effective in addressing the problems listed above. We start with a history and review of PinG modeling from its initial applications for ozone modeling in the Urban Airshed Model (UAM) in the early 1980s, using a relatively simple plume model, to more sophisticated and state-of-the-science plume models that include a full treatment of gas-phase, aerosol, and cloud chemistry, embedded in contemporary models such as CMAQ, CAMx, and WRF-Chem. We present examples of some typical results from PinG modeling for a variety of applications, discuss the implications of PinG on model predictions of source attribution, and discuss possible future developments and applications for PinG modeling.

  5. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.
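The core idea here, selecting features so that the Euclidean geometry of the reduced feature space matches that of a reference (perceptual) domain, can be illustrated with a brute-force toy version. `select_features` and its squared-error criterion are simplifications invented for illustration, not the auditory-model similarity measure used in the paper:

```python
from itertools import combinations
import math

def pairwise_dists(X, idx):
    """Pairwise Euclidean distances between rows of X restricted
    to the feature columns listed in idx."""
    n = len(X)
    return [math.dist([X[i][k] for k in idx], [X[j][k] for k in idx])
            for i in range(n) for j in range(i + 1, n)]

def select_features(X, P, k):
    """Pick the k feature columns of X whose pairwise geometry is
    closest (squared error) to the pairwise distances of a reference
    representation P. Brute force over all subsets -- fine for toys."""
    target = [math.dist(P[i], P[j])
              for i in range(len(P)) for j in range(i + 1, len(P))]
    best, best_err = None, float("inf")
    for idx in combinations(range(len(X[0])), k):
        d = pairwise_dists(X, idx)
        err = sum((a - b) ** 2 for a, b in zip(d, target))
        if err < best_err:
            best, best_err = idx, err
    return best
```

Note that the criterion needs no class labels, which matches the abstract's claim that the method works without labeled training data.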

  6. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale......-Mandel’s energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain condition. A phenomenologically macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic...... behavior and the trapped free energy in the material, in addition to the plastic behavior in terms of the anisotropic development of the yield surface. It is shown that a generalization of Hill’s anisotropic yield criterion can be used to model the Bauschinger effect, in addition to the pressure and size...

  7. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates sequential release – a feature in the new Danish interlocking systems. The generic model and safety properties can be instantiated with interlocking configuration data, resulting in a concrete model in the form of a Kripke structure, and in high-level safety properties expressed as state invariants. Using SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs.

  8. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2014-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates sequential release - a feature in the new Danish interlocking systems. The generic model and safety properties can be instantiated with interlocking configuration data, resulting in a concrete model in the form of a Kripke structure, and in high-level safety properties expressed as state invariants. Using SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs.

  9. Dorsal hand vein recognition based on Gabor multi-orientation fusion and multi-scale HOG features

    Science.gov (United States)

    Han, Tuo; Wang, Zhiyong; Yang, Xiaoping

    2016-10-01

    Factors such as illumination and hand gestures can reduce the accuracy of dorsal hand vein recognition. Targeting single hand vein images with low contrast and simple structure, an algorithm combining Gabor multi-orientation feature fusion with a Multi-scale Histogram of Oriented Gradient (MS-HOG) is proposed in this paper. With this method, more features are extracted to improve recognition accuracy. First, diagrams at multiple scales and orientations are acquired using the Gabor transformation; then the Gabor features of the same scale across orientations are fused, and the features of the corresponding fusion diagrams are extracted with a HOG operator at a given scale. Finally, multi-scale cascaded histograms are obtained for hand vein recognition. The experimental results show that our method not only improves recognition accuracy but also has good robustness in dorsal hand vein recognition.
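A stripped-down sketch of the multi-scale orientation-histogram part (MS-HOG) might look like the following; the Gabor fusion stage and the full HOG cell/block normalization are omitted, and the downsampling-by-striding used for "scales" is an assumption for illustration:

```python
import math

def orientation_histogram(img, bins=8):
    """Gradient-magnitude-weighted histogram of unsigned gradient
    orientations over the interior pixels of a 2-D list image."""
    h = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi     # unsigned orientation
            h[min(int(ang / math.pi * bins), bins - 1)] += mag
    return h

def multiscale_hog(img, scales=(1, 2), bins=8):
    """Concatenate orientation histograms computed on the image and on
    strided (downsampled) copies -- a toy multi-scale descriptor."""
    feats = []
    for s in scales:
        small = [row[::s] for row in img[::s]]
        feats.extend(orientation_histogram(small, bins))
    return feats
```

Cascading the per-scale histograms into one vector is what lets a single descriptor capture both fine and coarse vein structure.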

  10. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice

  11. Multiscale vascular surface model generation from medical imaging data using hierarchical features.

    Science.gov (United States)

    Bekkers, Eric J; Taylor, Charles A

    2008-03-01

    Computational fluid dynamics (CFD) modeling of blood flow from image-based patient specific models can provide useful physiologic information for guiding clinical decision making. A novel method for the generation of image-based, 3-D, multiscale vascular surface models for CFD is presented. The method generates multiscale surfaces based on either a linear triangulated or a globally smooth nonuniform rational B-spline (NURB) representation. A robust local curvature analysis is combined with a novel global feature analysis to set mesh element size. The method is particularly useful for CFD modeling of complex vascular geometries that have a wide range of vasculature size scales, in conditions where 1) initial surface mesh density is an important consideration for balancing surface accuracy with manageable size volumetric meshes, 2) adaptive mesh refinement based on flow features makes an underlying explicit smooth surface representation desirable, and 3) semi-automated detection and trimming of a large number of inlet and outlet vessels expedites model construction.

  12. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of docume...... topics at par with a much larger case specific vocabulary.

  13. Nonlinear multidimensional scaling and visualization of earthquake clusters over space, time and feature space

    Directory of Open Access Journals (Sweden)

    W. Dzwinel

    2005-01-01

    Full Text Available We present a novel technique based on a multi-resolutional clustering and nonlinear multi-dimensional scaling of earthquake patterns to investigate observed and synthetic seismic catalogs. The observed data represent seismic activities around the Japanese islands during 1997-2003. The synthetic data were generated by numerical simulations for various cases of a heterogeneous fault governed by 3-D elastic dislocation and power-law creep. At the highest resolution, we analyze the local cluster structures in the data space of seismic events for the two types of catalogs by using an agglomerative clustering algorithm. We demonstrate that small magnitude events produce local spatio-temporal patches delineating neighboring large events. Seismic events, quantized in space and time, generate the multi-dimensional feature space characterized by the earthquake parameters. Using a non-hierarchical clustering algorithm and nonlinear multi-dimensional scaling, we explore the multitudinous earthquakes by real-time 3-D visualization and inspection of the multivariate clusters. At the spatial resolutions characteristic of the earthquake parameters, all of the ongoing seismicity both before and after the largest events accumulates to a global structure consisting of a few separate clusters in the feature space. We show that by combining the results of clustering in both low and high resolution spaces, we can recognize precursory events more precisely and unravel vital information that cannot be discerned at a single resolution.

  14. Multi-scale feature extraction for learning-based classification of coronary artery stenosis

    Science.gov (United States)

    Tessmann, Matthias; Vega-Higuera, Fernando; Fritz, Dominik; Scheuering, Michael; Greiner, Günther

    2009-02-01

    Assessment of computed tomography coronary angiograms for diagnostic purposes is a mostly manual, time-consuming task demanding a high degree of clinical experience. In order to support diagnosis, a method for reliable automatic detection of stenotic lesions in computed tomography angiograms is presented. Thereby, lesions are detected by boosting-based classification. Hence, a strong classifier is trained using the AdaBoost algorithm on annotated data. Subsequently, the resulting strong classification function is used in order to detect different types of coronary lesions in previously unseen data. As pattern recognition algorithms require a description of the objects to be classified, a novel approach for feature extraction in computed tomography angiograms is introduced. By generation of cylinder segments that approximate the vessel shape at multiple scales, feature values can be extracted that adequately describe the properties of stenotic lesions. As a result of the multi-scale approach, the algorithm is capable of dealing with the variability of stenotic lesion configuration. Evaluation of the algorithm was performed on a large database containing unseen segmented centerlines from cardiac computed tomography images. Results showed that the method was able to detect stenotic cardiovascular diseases with high sensitivity and specificity. Moreover, lesion based evaluation revealed that the majority of stenoses can be reliably identified in terms of position, type and extent.
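The boosting-based classification step can be illustrated with a minimal AdaBoost over decision stumps; this is the textbook algorithm on toy 1-D data, not the cylinder-segment features or the trained detector from the paper:

```python
import math

def train_stump(X, y, w):
    """Best weighted threshold stump: (feature, threshold, polarity)."""
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for thr in sorted({row[f] for row in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[f] >= thr else -pol) != yi)
                if err < best_err:
                    best, best_err = (f, thr, pol), err
    return best, best_err

def adaboost(X, y, rounds=5):
    """Minimal AdaBoost with stumps; labels must be +1/-1."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        (f, thr, pol), err = train_stump(X, y, w)
        err = max(err, 1e-10)                      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # stump weight
        ensemble.append((alpha, f, thr, pol))
        w = [wi * math.exp(-alpha * yi * (pol if xi[f] >= thr else -pol))
             for xi, yi, wi in zip(X, y, w)]       # re-weight mistakes up
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * (p if x[f] >= t else -p) for a, f, t, p in ensemble)
    return 1 if score >= 0 else -1
```

The "strong classification function" in the abstract corresponds to the weighted vote in `predict`.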

  15. The SWISS-MODEL Repository—new features and functionality

    Science.gov (United States)

    Bienert, Stefan; Waterhouse, Andrew; de Beer, Tjaart A. P.; Tauriello, Gerardo; Studer, Gabriel; Bordoli, Lorenza; Schwede, Torsten

    2017-01-01

    SWISS-MODEL Repository (SMR) is a database of annotated 3D protein structure models generated by the automated SWISS-MODEL homology modeling pipeline. It currently holds >400 000 high quality models covering almost 20% of Swiss-Prot/UniProtKB entries. In this manuscript, we provide an update of features and functionalities which have been implemented recently. We address improvements in target coverage, model quality estimates, functional annotations and improved in-page visualization. We also introduce a new update concept which includes regular updates of an expanded set of core organism models and UniProtKB-based targets, complemented by user-driven on-demand update of individual models. With the new release of the modeling pipeline, SMR has implemented a REST-API and adopted an open licencing model for accessing model coordinates, thus enabling bulk download for groups of targets fostering re-use of models in other contexts. SMR can be accessed at https://swissmodel.expasy.org/repository. PMID:27899672

  16. Evidence on Features of a DSGE Business Cycle Model from Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2012-01-01

    The empirical support for features of a Dynamic Stochastic General Equilibrium model with two technology shocks is evaluated using Bayesian model averaging over vector autoregressions. The model features include equilibria, restrictions on long-run responses, a structural break of unknown

  17. Hidden Markov models for prediction of protein features

    DEFF Research Database (Denmark)

    Bystroff, Christopher; Krogh, Anders

    2008-01-01

    Hidden Markov Models (HMMs) are an extremely versatile statistical representation that can be used to model any set of one-dimensional discrete symbol data. HMMs can model protein sequences in many ways, depending on what features of the protein are represented by the Markov states. For protein...... structure prediction, states have been chosen to represent either homologous sequence positions, local or secondary structure types, or transmembrane locality. The resulting models can be used to predict common ancestry, secondary or local structure, or membrane topology by applying one of the two standard...... algorithms for comparing a sequence to a model. In this chapter, we review those algorithms and discuss how HMMs have been constructed and refined for the purpose of protein structure prediction....

  18. Feature Analysis and Modeling of the Network Community Structure

    Institute of Scientific and Technical Information of China (English)

    袁超; 柴毅; 魏善碧

    2012-01-01

    Community structure has an important influence on the structural and dynamic characteristics of complex systems, so it has attracted a large number of researchers. However, due to its complexity, the mechanism of action of the community structure is still not clear. In this paper, some features of the community structure are discussed, and a constraint model of the community is deduced. This model is effective in identifying communities and, especially, in identifying the overlapping nodes between communities. A community detection algorithm with linear time complexity is then proposed based on this constraint model, a proposed node-similarity model, and the modularity Q. Experiments on a series of real-world and synthetic networks illustrate the high performance of the algorithm and the constraint model.
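The modularity Q used above has a standard closed form, Q = Σ_c (e_c/m − (d_c/2m)²), with m edges, e_c intra-community edges and d_c the total degree of community c; the graph and partition below are illustrative:

```python
def modularity(edges, communities):
    """Newman's modularity Q for an undirected graph given as an edge
    list and a node -> community-label mapping."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for c in set(communities.values()):
        e_c = sum(1 for u, v in edges
                  if communities[u] == c and communities[v] == c)
        d_c = sum(d for n, d in deg.items() if communities[n] == c)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q
```

For two triangles joined by a single bridge edge, splitting at the bridge gives Q ≈ 0.357, i.e. clearly better than chance; detection algorithms like the one in this record use Q as the quality score to optimize.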

  19. Combining Spatial and Telemetric Features for Learning Animal Movement Models

    CERN Document Server

    Kapicioglu, Berk; Wikelski, Martin; Broderick, Tamara

    2012-01-01

    We introduce a new graphical model for tracking radio-tagged animals and learning their movement patterns. The model provides a principled way to combine radio telemetry data with an arbitrary set of user-defined, spatial features. We describe an efficient stochastic gradient algorithm for fitting model parameters to data and demonstrate its effectiveness via asymptotic analysis and synthetic experiments. We also apply our model to real datasets, and show that it outperforms the most popular radio telemetry software package used in ecology. We conclude that integration of different data sources under a single statistical framework, coupled with appropriate parameter and state estimation procedures, produces both accurate location estimates and an interpretable statistical model of animal movement.

  20. Speckle-reducing scale-invariant feature transform match for synthetic aperture radar image registration

    Science.gov (United States)

    Wang, Xianmin; Li, Bo; Xu, Qizhi

    2016-07-01

    The anisotropic scale space (ASS) is often used to enhance the performance of a scale-invariant feature transform (SIFT) algorithm in the registration of synthetic aperture radar (SAR) images. The existing ASS-based methods usually suffer from unstable keypoints and false matches, since the anisotropic diffusion filtering has limitations in reducing the speckle noise from SAR images while building the ASS image representation. We propose a speckle-reducing SIFT match method to obtain stable keypoints and acquire precise matches for SAR image registration. First, the keypoints are detected in a speckle-reducing anisotropic scale space constructed by speckle-reducing anisotropic diffusion, so that speckle noise is greatly reduced and prominent structures of the images are preserved; consequently, stable keypoints can be derived. Next, the probabilistic relaxation labeling approach is employed to establish the matches of the keypoints, and the correct match rate of the keypoints is thus significantly increased. Experiments conducted on simulated speckled images and real SAR images demonstrate the effectiveness of the proposed method.
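For context, the common baseline that the relaxation-labeling step improves on is nearest-neighbour descriptor matching with Lowe's ratio test, sketched here on plain descriptor vectors (the descriptors and the 0.8 ratio are illustrative):

```python
import math

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: accept a
    match only when the best distance is clearly smaller than the
    second-best, which suppresses ambiguous correspondences."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

Probabilistic relaxation labeling goes further by iteratively re-weighting candidate matches using the consistency of neighbouring correspondences, which is why it raises the correct match rate on speckled imagery.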

  1. The Mini-IPIP Scale: psychometric features and relations with PTSD symptoms of Chinese earthquake survivors.

    Science.gov (United States)

    Li, Zhongquan; Sang, Zhiqin; Wang, Li; Shi, Zhanbiao

    2012-10-01

    The present purpose was to validate the Mini-IPIP scale, a short measure of the five-factor model personality traits, with a sample of Chinese earthquake survivors. A total of 1,563 participants, ages 16 to 85 years, completed the Mini-IPIP scale and a measure of posttraumatic stress disorder (PTSD) symptoms. Confirmatory factor analysis supported the five-factor structure of the Mini-IPIP with adequate values of various fit indices. The scale also showed adequate internal consistency: Cronbach's alphas ranged from .79 to .84, and McDonald's omega ranged from .73 to .82 for scores on each subscale. Moreover, the five personality traits measured by the Mini-IPIP and those assessed by other big five measures had comparable patterns of relations with PTSD symptoms. Findings indicated that the Mini-IPIP is an adequate short-form of the Big-Five factors of personality, which is applicable with natural disaster survivors.
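Cronbach's alpha, reported above as a reliability index, has a simple closed form: α = k/(k−1) · (1 − Σᵢ var(itemᵢ) / var(total)). A direct computation on toy item scores (population variances; the data below are made up):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns, each column
    holding one item's scores across the same respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(c) for c in items) / var(totals))
```

Perfectly redundant items give α = 1, and values like the .79-.84 range in the abstract indicate strong but not redundant inter-item agreement.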

  2. New Analysis Method Application in Metallographic Images through the Construction of Mosaics Via Speeded Up Robust Features and Scale Invariant Feature Transform

    Directory of Open Access Journals (Sweden)

    Pedro Pedrosa Rebouças Filho

    2015-06-01

    Full Text Available In many applications in metallography and analysis, many regions need to be considered, not only the current region. In analyses with multiple images, the specialist should also evaluate neighboring areas. For example, in metallurgy, welding technology is derived from conventional testing and metallographic analysis. In welding, these tests reveal the features of the metal, especially in the Heat-Affected Zone (HAZ), the region where natural metallurgical problems are most likely to occur in welding. The expanse of the Heat-Affected Zone exceeds the size of the area observed through a microscope and typically requires multiple images to be mounted on a larger picture surface to allow for the study of the entire heat-affected zone. This image-stitching process is performed manually and is subject to all the inherent flaws of the human being resulting from fatigue and distraction. The analysis of grain growth also requires the examination of multiple regions, not necessarily neighboring ones, and would be a useful tool to aid a specialist. In areas such as microscopic metallography, which studies metallurgical products with the aid of a microscope, the assembly of mosaics is done manually, which consumes a lot of time and is also subject to failures due to human limitations. The mosaic technique is used to construct environments or scenes with corresponding characteristics between themselves. From several small images with corresponding characteristics among themselves, a new, larger model is generated. This article proposes the use of Digital Image Processing for the automation of the construction of these mosaics in metallographic images. The proposed method is meant to significantly reduce the time required to build the mosaic and reduce the possibility of failures in assembling the final image, therefore increasing efficiency in obtaining

  3. Wavelet Correlation Feature Scale Entropy and Fuzzy Support Vector Machine Approach for Aeroengine Whole-Body Vibration Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Cheng-Wei Fei

    2013-01-01

    Full Text Available In order to correctly analyze aeroengine whole-body vibration signals, a Wavelet Correlation Feature Scale Entropy (WCFSE) and Fuzzy Support Vector Machine (FSVM) method (WCFSE-FSVM) was proposed by fusing the advantages of the WCFSE method and the FSVM method. The wavelet coefficients were known to be located in high Signal-to-Noise Ratio (S/N or SNR) scales and were obtained by the Wavelet Transform Correlation Filter Method (WTCFM). This method was applied to address the whole-body vibration signals. The WCFSE method was derived from the integration of information entropy theory and WTCFM, and was applied to extract the WCFSE values of the vibration signals. Among the WCFSE values, the WCFSE1 and WCFSE2 values on scales 1 and 2, from the high band of the vibration signal, were believed to acceptably reflect the vibration features and were selected to construct the eigenvectors of the vibration signals as fault samples to establish the WCFSE-FSVM model. This model was applied to aeroengine whole-body vibration fault diagnosis. Through the diagnoses of four vibration fault modes and the comparison of the analysis results by four methods (SVM, FSVM, WESE-SVM, WCFSE-FSVM), it is shown that the WCFSE-FSVM method is characterized by higher learning ability, higher generalization ability and higher anti-noise ability than the other methods in aeroengine whole-body vibration fault analysis. Meanwhile, the present study provides useful insight for the vibration fault diagnosis of complex machinery besides aeroengines.

  4. A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property

    Energy Technology Data Exchange (ETDEWEB)

    Tsap, L V; Zhang, Y; Goldgof, D B; Sarkar, S

    2004-04-02

    A modeling approach is presented for quantitative burn scar assessment. Emphasis is given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and a regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.
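The regularized parameter-recovery step can be illustrated in miniature with Tikhonov-regularized least squares: given a linearized forward model mapping elasticity parameters to observed displacements, recover the parameters while penalizing their norm. This is a generic sketch under that assumption, not the authors' FEM formulation:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimize ||A x - b||^2 + alpha ||x||^2 (regularized least squares).
    In the burn-scar setting, x would play the role of elasticity parameters,
    A a linearized FEM response matrix, and b the observed displacements."""
    n = A.shape[1]
    # Normal equations of the regularized problem: (A^T A + alpha I) x = A^T b
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

The regularization weight alpha trades data fit against stability; for noisy displacement measurements a larger alpha damps unphysical parameter estimates.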

  5. Efficient and Robust Feature Model for Visual Tracking

    Institute of Scientific and Technical Information of China (English)

    WANG Lu; ZHUO Qing; WANG Wenyuan

    2009-01-01

    Long-duration visual tracking of targets is quite challenging for computer vision, because the environments may be cluttered and distracting. Illumination variations and partial occlusions are two main difficulties in real-world visual tracking. Existing methods based on holistic appearance information cannot solve these problems effectively. This paper proposes a feature-based dynamic tracking approach that can track objects with partial occlusions and varying illumination. The method represents the tracked object by an invariant feature model. During tracking, a new pyramid matching algorithm is used to match the object template with the observations to determine the observation likelihood. This matching is computationally efficient, and the spatial constraints among the features are also embedded. Instead of complicated optimization methods, the whole model is incorporated into a Bayesian filtering framework. Experiments on real-world sequences demonstrate that the method can track objects accurately and robustly even with illumination variations and partial occlusions.
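A pyramid matching score in the style of Grauman and Darrell can be sketched as follows: intersect histograms of feature positions at progressively coarser grids, crediting matches newly formed at a coarser level with a geometrically smaller weight. This is a generic one-dimensional illustration, not the paper's specific algorithm:

```python
import numpy as np

def pyramid_match(x, y, levels=3, lo=0.0, hi=1.0):
    """Pyramid match score between two 1-D feature point sets.
    Histogram intersections are computed from fine to coarse grids; matches
    that first appear at a coarser level receive half the previous weight."""
    prev = 0.0
    score = 0.0
    for level in range(levels):
        bins = 2 ** (levels - 1 - level)     # fine grid first, then coarser
        hx, _ = np.histogram(x, bins=bins, range=(lo, hi))
        hy, _ = np.histogram(y, bins=bins, range=(lo, hi))
        inter = np.minimum(hx, hy).sum()     # histogram intersection
        weight = 1.0 / (2 ** level)          # coarser matches count less
        score += weight * (inter - prev)     # credit only new matches
        prev = inter
    return score
```

Two identical point sets score exactly their cardinality (every point matches at the finest level with weight 1), which is the maximum attainable score.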

  6. Full-field feature profile models in process control

    Science.gov (United States)

    Zavecz, Terrence E.

    2005-05-01

    Most process window analysis applications are capable of deriving the functional focus-dose workspace available to any set of device specifications. Previous work in this area has concentrated on calculating the superpositioned optimum operating points of various combinations of feature orientations or feature types. These studies invariably result in an average performance calculation that is biased by the perturbations contributed by the substrate, reticle and exposure tool. Many SEMs and optical metrology tools now provide full-feature profile information for multiple points in the exposure field. The inclusion of field spatial information in the process window analysis results in a calculation of greater accuracy and process understanding, because the capabilities of each exposure tool can now be individually modeled and optimized. Such an analysis provides the added benefit that, once the exposure tool is characterized, its process perturbations can be removed from the analysis to provide greater understanding of the true process performance. Process window variables are shown to vary significantly across the exposure field of the scanner. Evaluating the depth-of-focus and optimum focus-dose at each point in the exposure field yields additional information on the imaging response of the reticle and the scan linearity of the exposure tool's reticle stage. The optimal focus response of the reticle is then removed from a full wafer exposure and the results are modeled to obtain a true process response and performance.

  7. Spermatozoa scattering by a microchannel feature: an elastohydrodynamic model

    CERN Document Server

    Montenegro-Johnson, Thomas; Smith, David J

    2014-01-01

    Sperm traverse their microenvironment through viscous fluid by propagating flagellar waves; the waveform emerges as a consequence of elastic structure, internal active moments, and low Reynolds number fluid dynamics. Engineered microchannels have recently been proposed as a method of sorting and manipulating motile cells; the interaction of cells with these artificial environments therefore warrants investigation. A numerical method is presented for the geometrically nonlinear elastohydrodynamic interaction of active swimmers with domain features. This method is employed to examine hydrodynamic scattering by a model microchannel backstep feature. Scattering is shown to depend on backstep height and the relative strength of viscous and elastic forces in the flagellum. In a 'high viscosity' parameter regime corresponding to human sperm in cervical mucus analogue, this hydrodynamic contribution to scattering is comparable in magnitude to recent data on contact effects, being of the order of 5-10 degrees. Scatter...

  8. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to reach a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
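The two-stage idea can be sketched with the simplest possible per-feature models: a diagonal Gaussian per genre and feature type (a stand-in for the GMM/HMM models in the abstract), with stage two fusing the soft scores by summing log-likelihoods, i.e. multiplying likelihoods, before taking the argmax. All names and model choices here are illustrative assumptions:

```python
import numpy as np

def fit_gaussian(samples):
    """Per-genre, per-feature model: a diagonal Gaussian over feature vectors."""
    mu = samples.mean(axis=0)
    var = samples.var(axis=0) + 1e-6   # floor the variance for stability
    return mu, var

def log_likelihood(x, model):
    mu, var = model
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var))

def classify(feature_vectors, models_per_genre):
    """Stage 1: soft scores from each feature model; stage 2: fuse by summing
    log-likelihoods across feature types and take the best-scoring genre."""
    scores = {}
    for genre, models in models_per_genre.items():
        scores[genre] = sum(log_likelihood(x, m)
                            for x, m in zip(feature_vectors, models))
    return max(scores, key=scores.get)
```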

  9. Modeling photoacoustic spectral features of micron-sized particles.

    Science.gov (United States)

    Strohm, Eric M; Gorelikov, Ivan; Matsuura, Naomi; Kolios, Michael C

    2014-10-07

    The photoacoustic signal generated from particles when irradiated by light is determined by attributes of the particle such as the size, speed of sound, morphology and the optical absorption coefficient. Unique features such as periodically varying minima and maxima are observed throughout the photoacoustic signal power spectrum, where the periodicity depends on these physical attributes. The frequency content of the photoacoustic signals can be used to obtain the physical attributes of unknown particles by comparison to analytical solutions of homogeneous symmetric geometric structures, such as spheres. However, analytical solutions do not exist for irregularly shaped particles, inhomogeneous particles or particles near structures. A finite element model (FEM) was used to simulate photoacoustic wave propagation from four different particle configurations: a homogeneous particle suspended in water, a homogeneous particle on a reflecting boundary, an inhomogeneous particle with an absorbing shell and non-absorbing core, and an irregularly shaped particle such as a red blood cell. Biocompatible perfluorocarbon droplets, 3-5 μm in diameter containing optically absorbing nanoparticles were used as the representative ideal particles, as they are spherical, homogeneous, optically translucent, and have known physical properties. The photoacoustic spectrum of micron-sized single droplets in suspension and on a reflecting boundary were measured over the frequency range of 100-500 MHz and compared directly to analytical models and the FEM. Good agreement between the analytical model, FEM and measured values was observed for a droplet in suspension, where the spectral minima agreed to within a 3.3 MHz standard deviation. For a droplet on a reflecting boundary, spectral features were correctly reproduced using the FEM but not the analytical model. The photoacoustic spectra from other common particle configurations such as a particle with an absorbing shell and a

  10. Modeling photoacoustic spectral features of micron-sized particles

    Science.gov (United States)

    Strohm, Eric M.; Gorelikov, Ivan; Matsuura, Naomi; Kolios, Michael C.

    2014-10-01

    The photoacoustic signal generated from particles when irradiated by light is determined by attributes of the particle such as the size, speed of sound, morphology and the optical absorption coefficient. Unique features such as periodically varying minima and maxima are observed throughout the photoacoustic signal power spectrum, where the periodicity depends on these physical attributes. The frequency content of the photoacoustic signals can be used to obtain the physical attributes of unknown particles by comparison to analytical solutions of homogeneous symmetric geometric structures, such as spheres. However, analytical solutions do not exist for irregularly shaped particles, inhomogeneous particles or particles near structures. A finite element model (FEM) was used to simulate photoacoustic wave propagation from four different particle configurations: a homogeneous particle suspended in water, a homogeneous particle on a reflecting boundary, an inhomogeneous particle with an absorbing shell and non-absorbing core, and an irregularly shaped particle such as a red blood cell. Biocompatible perfluorocarbon droplets, 3-5 μm in diameter containing optically absorbing nanoparticles were used as the representative ideal particles, as they are spherical, homogeneous, optically translucent, and have known physical properties. The photoacoustic spectrum of micron-sized single droplets in suspension and on a reflecting boundary were measured over the frequency range of 100-500 MHz and compared directly to analytical models and the FEM. Good agreement between the analytical model, FEM and measured values was observed for a droplet in suspension, where the spectral minima agreed to within a 3.3 MHz standard deviation. For a droplet on a reflecting boundary, spectral features were correctly reproduced using the FEM but not the analytical model. The photoacoustic spectra from other common particle configurations such as a particle with an absorbing shell and a

  11. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  12. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  13. Automated Image Retrieval of Chest CT Images Based on Local Grey Scale Invariant Features.

    Science.gov (United States)

    Arrais Porto, Marcelo; Cordeiro d'Ornellas, Marcos

    2015-01-01

    Text-based tools are regularly employed to retrieve medical images for reading and interpretation in current Picture Archiving and Communication Systems (PACS), but they pose some drawbacks. All-purpose content-based image retrieval (CBIR) systems are limited when dealing with medical images and do not fit well into PACS workflow and clinical practice. This paper presents an automated image retrieval approach for chest CT images based on local grey scale invariant features from a local database. Performance was measured in terms of precision and recall, average retrieval precision (ARP), and average retrieval rate (ARR). Preliminary results have shown the effectiveness of the proposed approach. The prototype is also a useful tool for radiology research and education, providing valuable information to the medical and broader healthcare community.

  14. Geologically recent small-scale surface features in Meridiani Planum and Gale Crater, Mars

    Science.gov (United States)

    Horne, David

    2014-05-01

    Enigmatic small scale (run-off may occur occasionally under present conditions in low, near-equatorial latitudes on Mars; short-lived (even for just a few minutes) meltwater emission and flow at the surface could erode gutters before evaporating. The decomposition of buried pockets of methane clathrates, which theoretical considerations suggest might be present and stable even in equatorial regions, could give rise to both methane venting (leveed fissures) and transient surface water (gutters). Another possibility is the decomposition, due to local changes in thermal conditions, of hydrated magnesium sulphates in the bedrock, releasing liquid water. Whatever their explanation, these features hint at previously unrecognized, young martian surface processes which may even be active at the present day; in this context, the apparent downslope extension of a discrete dark dust streak on Burns Cliff (inside Endurance Crater), during Opportunity's approach to that locality, is particularly thought-provoking.

  15. Modeling DNA beacons at the mesoscopic scale

    CERN Document Server

    Errami, Jalal; Theodorakopoulos, Nikos

    2007-01-01

    We report model calculations on DNA single strands which describe the equilibrium dynamics and kinetics of hairpin formation and melting. Modeling is at the level of single bases. Strand rigidity is described in terms of simple polymer models; alternative calculations performed using the freely rotating chain and the discrete Kratky-Porod models are reported. Stem formation is modeled according to the Peyrard-Bishop-Dauxois Hamiltonian. The kinetics of opening and closing is described in terms of a diffusion-controlled motion in an effective free energy landscape. Melting profiles, dependence of melting temperature on loop length, and kinetic time scales are in semiquantitative agreement with experimental data obtained from fluorescent DNA beacons forming poly(T) loops. Variation in strand rigidity is not sufficient to account for the large activation enthalpy of closing and the strong loop length dependence observed in hairpins forming poly(A) loops. Implications for modeling single strands of DNA or RNA are...

  16. Spatiotemporal exploratory models for broad-scale survey data.

    Science.gov (United States)

    Fink, Daniel; Hochachka, Wesley M; Zuckerberg, Benjamin; Winkler, David W; Shaby, Ben; Munson, M Arthur; Hooker, Giles; Riedewald, Mirek; Sheldon, Daniel; Kelling, Steve

    2010-12-01

    The distributions of animal populations change and evolve through time. Migratory species exploit different habitats at different times of the year. Biotic and abiotic features that determine where a species lives vary due to natural and anthropogenic factors. This spatiotemporal variation needs to be accounted for in any modeling of species' distributions. In this paper we introduce a semiparametric model that provides a flexible framework for analyzing dynamic patterns of species occurrence and abundance from broad-scale survey data. The spatiotemporal exploratory model (STEM) adds essential spatiotemporal structure to existing techniques for developing species distribution models through a simple parametric structure without requiring a detailed understanding of the underlying dynamic processes. STEMs use a multi-scale strategy to differentiate between local and global-scale spatiotemporal structure. A user-specified species distribution model accounts for spatial and temporal patterning at the local level. These local patterns are then allowed to "scale up" via ensemble averaging to larger scales. This makes STEMs especially well suited for exploring distributional dynamics arising from a variety of processes. Using data from eBird, an online citizen science bird-monitoring project, we demonstrate that monthly changes in distribution of a migratory species, the Tree Swallow (Tachycineta bicolor), can be more accurately described with a STEM than a conventional bagged decision tree model in which spatiotemporal structure has not been imposed. We also demonstrate that there is no loss of model predictive power when a STEM is used to describe a spatiotemporal distribution with very little spatiotemporal variation: the distribution of a nonmigratory species, the Northern Cardinal (Cardinalis cardinalis).
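The "scale up via ensemble averaging" mechanism can be sketched in miniature: fit many trivially simple base models (here, local means) on randomly placed spatial blocks, then predict at a point by averaging all blocks that cover it. This is a toy under stated assumptions, with hypothetical names, not the STEM implementation:

```python
import numpy as np

def fit_stem(coords, values, n_blocks=200, block_size=0.4, rng=None):
    """Toy STEM-style ensemble on the unit square: each base model is the mean
    response inside one randomly placed square block."""
    rng = np.random.default_rng(rng)
    blocks = []
    for _ in range(n_blocks):
        x0, y0 = rng.uniform(0, 1 - block_size, size=2)
        inside = ((coords[:, 0] >= x0) & (coords[:, 0] <= x0 + block_size) &
                  (coords[:, 1] >= y0) & (coords[:, 1] <= y0 + block_size))
        if inside.any():
            blocks.append((x0, y0, values[inside].mean()))
    return blocks

def predict_stem(blocks, point, block_size=0.4):
    """Ensemble-average the local models whose blocks cover the query point."""
    x, y = point
    local = [m for (x0, y0, m) in blocks
             if x0 <= x <= x0 + block_size and y0 <= y <= y0 + block_size]
    return float(np.mean(local)) if local else float('nan')
```

Replacing the local mean with any user-specified species distribution model recovers the general STEM recipe: local fits, global predictions by averaging.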

  17. Fine-scale features on bioreplicated decoys of the emerald ash borer provide necessary visual verisimilitude

    Science.gov (United States)

    Domingue, Michael J.; Pulsifer, Drew P.; Narkhede, Mahesh S.; Engel, Leland G.; Martín-Palma, Raúl J.; Kumar, Jayant; Baker, Thomas C.; Lakhtakia, Akhlesh

    2014-03-01

    The emerald ash borer (EAB), Agrilus planipennis, is an invasive tree-killing pest in North America. Like other buprestid beetles, it has an iridescent coloring, produced by a periodically layered cuticle whose reflectance peaks at 540 nm wavelength. The males perform a visually mediated ritualistic mating flight directly onto females poised on sunlit leaves. We attempted to evoke this behavior using artificial visual decoys of three types. To fabricate decoys of the first type, a polymer sheet coated with a Bragg-stack reflector was loosely stamped by a bioreplicating die. For decoys of the second type, a polymer sheet coated with a Bragg-stack reflector was heavily stamped by the same die and then painted green. Every decoy of these two types had an underlying black absorber layer. Decoys of the third type were produced by a rapid prototyping machine and painted green. Fine-scale features were absent on the third type. Experiments were performed in an American ash forest infested with EAB, and in a European oak forest home to a similar pest, the two-spotted oak borer (TSOB), Agrilus biguttatus. When pinned to leaves, dead EAB females, dead TSOB females, and bioreplicated decoys of both types often evoked the complete ritualized flight behavior. Males also initiated approaches to the rapidly prototyped decoy, but would divert elsewhere without making contact. The attraction of the bioreplicated decoys was also demonstrated by applying a high dc voltage across the decoys, which stunned and killed approaching beetles. Thus, true bioreplication with fine-scale features is necessary to fully evoke ritualized visual responses in insects, and provides an opportunity for developing insect-trapping technologies.

  18. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves the knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to the permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
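The Information Gain criterion mentioned above reduces to a small entropy calculation: the reduction in label entropy achieved by splitting a predictor at a threshold. A minimal sketch (hypothetical names; real FS tools evaluate many thresholds and discretizations):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature, labels, threshold):
    """IG of a binary split of a continuous predictor at `threshold`,
    relative to class labels (e.g. permafrost presence/absence)."""
    mask = feature <= threshold
    n = len(labels)
    h_split = (mask.sum() / n) * entropy(labels[mask]) + \
              ((~mask).sum() / n) * entropy(labels[~mask])
    return entropy(labels) - h_split
```

A predictor whose best-threshold IG is near zero carries little information about presence/absence and is a candidate for removal.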

  19. Probabilistic, meso-scale flood loss modelling

    Science.gov (United States)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. State of the art in flood loss modelling is still the use of simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken, on the one hand, via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models; on the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto A, Kreibich H, Merz B, Schröter K (submitted) Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
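Why bagging yields a loss *distribution* rather than a point estimate can be sketched with depth-1 regression trees: each bootstrap resample produces one stump, and the spread of the per-tree predictions quantifies uncertainty. This is a generic illustration of the bagging principle, not the BT-FLEMO model; all names are hypothetical:

```python
import numpy as np

def best_stump(X, y):
    """Depth-1 regression tree: best (feature, threshold) split by SSE."""
    best = (None, None, float(y.mean()), float(y.mean()), np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            sse = ((left - left.mean()) ** 2).sum() + \
                  ((right - right.mean()) ** 2).sum()
            if sse < best[4]:
                best = (j, t, float(left.mean()), float(right.mean()), sse)
    return best[:4]

def bagged_predict(X, y, x_new, n_trees=100, seed=0):
    """Bagging: one stump per bootstrap resample; the array of per-tree
    predictions is an empirical distribution of the estimated loss."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))       # bootstrap resample
        j, t, lo_m, hi_m = best_stump(X[idx], y[idx])
        preds.append(lo_m if (j is None or x_new[j] <= t) else hi_m)
    return np.array(preds)   # e.g. take quantiles for uncertainty bounds
```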

  20. A large-scale dataset of solar event reports from automated feature recognition modules

    Science.gov (United States)

    Schuh, Michael A.; Angryk, Rafal A.; Martens, Petrus C.

    2016-05-01

    The massive repository of images of the Sun captured by the Solar Dynamics Observatory (SDO) mission has ushered in the era of Big Data for Solar Physics. In this work, we investigate the entire public collection of events reported to the Heliophysics Event Knowledgebase (HEK) from automated solar feature recognition modules operated by the SDO Feature Finding Team (FFT). With the SDO mission recently surpassing five years of operations, and over 280,000 event reports for seven types of solar phenomena, we present the broadest and most comprehensive large-scale dataset of the SDO FFT modules to date. We also present numerous statistics on these modules, providing valuable contextual information for better understanding and validation of the individual event reports and the entire dataset as a whole. After extensive data cleaning through exploratory data analysis, we highlight several opportunities for knowledge discovery from data (KDD). Through these important prerequisite analyses presented here, the results of KDD from Solar Big Data will be overall more reliable and better understood. As the SDO mission remains operational over the coming years, these datasets will continue to grow in size and value. Future versions of this dataset will be analyzed in the general framework established in this work and maintained publicly online for easy access by the community.

  1. A large-scale dataset of solar event reports from automated feature recognition modules

    Directory of Open Access Journals (Sweden)

    Schuh Michael A.

    2016-01-01

    Full Text Available The massive repository of images of the Sun captured by the Solar Dynamics Observatory (SDO) mission has ushered in the era of Big Data for Solar Physics. In this work, we investigate the entire public collection of events reported to the Heliophysics Event Knowledgebase (HEK) from automated solar feature recognition modules operated by the SDO Feature Finding Team (FFT). With the SDO mission recently surpassing five years of operations, and over 280,000 event reports for seven types of solar phenomena, we present the broadest and most comprehensive large-scale dataset of the SDO FFT modules to date. We also present numerous statistics on these modules, providing valuable contextual information for better understanding and validation of the individual event reports and the entire dataset as a whole. After extensive data cleaning through exploratory data analysis, we highlight several opportunities for knowledge discovery from data (KDD). Through these important prerequisite analyses presented here, the results of KDD from Solar Big Data will be overall more reliable and better understood. As the SDO mission remains operational over the coming years, these datasets will continue to grow in size and value. Future versions of this dataset will be analyzed in the general framework established in this work and maintained publicly online for easy access by the community.

  2. Seasonal Transition Features of Large-Scale Moisture Transport in the Asian-Australian Monsoon Region

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Using NCEP/NCAR reanalysis data for the period 1957-2001, the climatological seasonal transition features of large-scale vertically integrated moisture transport (VIMT) in the Asian-Australian monsoon region are investigated in this paper. The basic features of the seasonal transition of VIMT from winter to summer are the establishment of the summertime "great moisture river" pattern (named the GMR pattern) and its eastward expansion, associated with a series of climatological events occurring in certain "key periods", including the occurrence of the notable southerly VIMT over the Indochina Peninsula in mid March, the activity of the low VIMT vortex around Sri Lanka in late April, and the onset of the South China Sea summer monsoon in mid May, among others. During the transition from summer to winter, however, the characteristics are mainly exhibited by the establishment of the easterly VIMT belt located in the tropical area, accompanied by events occurring in other "key periods". Further analyses reveal a great difference between the Indian and East Asian monsoon regions when viewed from the meridional migration of the westerly VIMT during the seasonal change process; accordingly, the Asian monsoon region can readily be divided into two parts along the western side of the Indochina Peninsula, which may also indicate different formation mechanisms in the two regions.

  3. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
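The MAP evaluation measure used above is easy to state concretely: for each query, average the precision computed at the rank of every relevant item, then average those per-query values. A compact sketch (hypothetical function names):

```python
import numpy as np

def average_precision(scores, relevant):
    """AP for one query: precision averaged at the rank of each relevant item."""
    order = np.argsort(scores)[::-1]                 # rank by descending score
    rel = np.asarray(relevant, dtype=bool)[order]
    hits = np.cumsum(rel)                            # relevant items seen so far
    ranks = np.arange(1, len(rel) + 1)
    return float((hits[rel] / ranks[rel]).mean()) if rel.any() else 0.0

def mean_average_precision(all_scores, all_relevant):
    """MAP: mean AP over queries (e.g. over the movies in the Affect Task)."""
    return float(np.mean([average_precision(s, r)
                          for s, r in zip(all_scores, all_relevant)]))
```

For instance, ranking items scored [0.9, 0.8, 0.7] with relevance [1, 0, 1] gives precisions 1/1 and 2/3 at the two relevant ranks, so AP = 5/6.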

  4. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing focuses exclusively on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  5. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    of scale formation found in many industrial processes, and especially in oilfield and geothermal operations. We want to contribute to the study of this problem by releasing a simple and accurate thermodynamic model capable of calculating the behaviour of scaling minerals, covering a wide range......-liquid equilibrium of sulphate scaling minerals (SrSO4, BaSO4, CaSO4 and CaSO4•2H2O) at temperatures up to 300ºC and pressures up to 1000 bar is described in chapter 4. Results for the binary systems (M2+, )-H2O; the ternary systems (Na+, M2+, )-H2O, and (Na+, M2+, Cl-)-H2O; and the quaternary systems (Na+, M2+)(Cl...

  6. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  7. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random-graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that apply randomization and replication to the finest relationships between network nodes, and modeling that aims only to preserve a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that were tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization while simultaneously satisfying some attribute can abolish topological attributes that have been undefined or hidden from
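
One of the replication-based generators the record names, Kronecker product modeling, can be sketched in a few lines: a small initiator matrix of edge probabilities is raised to a Kronecker power and the resulting probability matrix is sampled. The initiator values below are illustrative:

```python
import numpy as np

def kronecker_graph(initiator, k, rng):
    """Sample an adjacency matrix from a stochastic Kronecker model:
    the initiator's k-th Kronecker power gives per-edge probabilities."""
    P = initiator.copy()
    for _ in range(k - 1):
        P = np.kron(P, initiator)        # probabilities over n^k x n^k node pairs
    return (rng.random(P.shape) < P).astype(int)

rng = np.random.default_rng(0)
initiator = np.array([[0.9, 0.5],
                      [0.5, 0.1]])       # R-MAT-style 2x2 initiator
A = kronecker_graph(initiator, 4, rng)   # 16 x 16 sampled adjacency matrix
```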

  8. Fast method for reactor and feature scale coupling in ALD and CVD

    Energy Technology Data Exchange (ETDEWEB)

    Yanguas-Gil, Angel; Elam, Jeffrey W.

    2017-08-08

    Transport and surface chemistry of certain deposition techniques are modeled. The methods model transport inside nanostructures as a single-particle discrete Markov chain process. This approach decouples the complexity of the surface chemistry from the transport model, thus allowing its application under general surface chemistry conditions, including atomic layer deposition (ALD) and chemical vapor deposition (CVD). The methods provide for the determination of statistical information about the trajectories of individual molecules, such as the average interaction time or the number of wall collisions for molecules entering the nanostructures, as well as for tracking the relative contributions to thin-film growth of different independent reaction pathways at each point of the feature.
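
A single-particle Markov chain of the kind described can be sketched as a random walk down a one-dimensional feature, with a per-collision sticking probability standing in for the surface chemistry. All parameters are illustrative, not taken from the patent:

```python
import random

def simulate_molecule(n_segments, sticking_prob, rng):
    """Random walk of one precursor molecule along a 1D feature.

    Returns (segment index where it reacted, or None if it escaped,
    number of wall collisions)."""
    pos, collisions = 0, 0
    while pos >= 0:                          # pos < 0 means back out the opening
        collisions += 1                      # each visit counts as a wall encounter
        if rng.random() < sticking_prob:
            return pos, collisions           # reacted: contributes to growth here
        pos += 1 if rng.random() < 0.5 else -1
        if pos == n_segments:                # closed bottom of the feature: reflect
            pos = n_segments - 1
    return None, collisions                  # escaped without reacting

rng = random.Random(42)
results = [simulate_molecule(20, 0.01, rng) for _ in range(5000)]
reacted = [seg for seg, _ in results if seg is not None]
mean_collisions = sum(c for _, c in results) / len(results)
```

Averaging many such trajectories yields exactly the kind of statistics the record mentions: the fraction reacting at each depth and the mean number of wall collisions.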

  9. Constraints on the lithospheric structure of Venus from mechanical models and tectonic surface features

    Science.gov (United States)

    Zuber, Maria T.

    1987-01-01

    The evidence for the extensional or compressional origins of some prominent Venusian surface features disclosed by radar images is discussed. Using simple models, the hypothesis that the observed length scales (10-20 km and 100-300 km) of deformations are controlled by dominant wavelengths arising from unstable compression or extension of the Venus lithosphere is tested. The results show that the existence of tectonic features that exhibit both length scales can be explained if, at the time of deformation, the lithosphere consisted of a crust that was relatively strong near the surface and weak at its base, and an upper mantle that was stronger than or nearly comparable in strength to the upper crust.

  10. Video Watermarking Based on 3D Scale Invariant Spatio-Temporal Feature Points

    Directory of Open Access Journals (Sweden)

    Hongbo BI

    2014-01-01

    Full Text Available 3D SIFP (3-dimensional scale-invariant feature points) can embody the great changes in video in both the spatial and temporal domains, which assures stability against spatial and temporal attacks. As a result, they strengthen the robustness of video watermarking. In this paper, a novel video watermarking scheme exploiting 3D SIFP in the DCT (Discrete Cosine Transform) domain is proposed. We establish the 3D difference-of-Gaussian pyramid (3D DoGP) and 3D Hessian matrix to locate the 3D SIFP, which are selected in global and local steps. All global SIFP are ordered in ascending order according to their 3D Hessian matrix response values, and the SIFP corresponding to the first several global response values are selected to maintain stability. Afterwards, the SIFP with the largest local response value in the detected frame is selected as the center to generate a square embedding region. The region is transformed into the DCT domain, and the zigzag-scanned mid-frequency coefficients are segmented to embed the watermark using modified odd-even quantization. Experimental results show that the proposed scheme guarantees a high peak signal-to-noise ratio (PSNR) and is very robust against noising, filtering, JPEG compression, frame swapping, frame insertion, frame dropping, scaling, etc.
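
The odd-even quantization step mentioned at the end can be illustrated on a single coefficient: the coefficient is snapped to a multiple of a quantization step whose index parity encodes the bit. This is a generic sketch of the technique, not the paper's modified variant; the step size and values are illustrative:

```python
def embed_bit(coeff, bit, step=8.0):
    """Quantize a coefficient so its quantization-index parity encodes bit."""
    q = round(coeff / step)
    if q % 2 != bit:
        # move to the nearest quantization index with the right parity
        q += 1 if coeff / step >= q else -1
    return q * step

def extract_bit(coeff, step=8.0):
    """Recover the bit from the parity of the nearest quantization index."""
    return round(coeff / step) % 2

marked = embed_bit(37.3, 1)  # snapped to an odd multiple of the step
```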

  11. Mental Imagery Scale: a new measurement tool to assess structural features of mental representations

    Science.gov (United States)

    D'Ercole, Martina; Castelli, Paolo; Giannini, Anna Maria; Sbrilli, Antonella

    2010-05-01

    Mental imagery is a quasi-perceptual experience which resembles perceptual experience but occurs without (appropriate) external stimuli. It is a form of mental representation and is often considered centrally involved in visuo-spatial reasoning and inventive and creative thought. Although imagery ability is assumed to be functionally independent of verbal systems, it is still considered to interact with verbal representations, enabling objects to be named and names to evoke images. In the literature, most measurement tools for evaluating imagery capacity are self-report instruments focusing on individual differences. In the present work, we applied a Mental Imagery Scale (MIS) to mental images derived from verbal descriptions in order to assess the structural features of such mental representations. This is a key theme for those disciplines which need to turn objects and representations into words and vice versa, such as art or architectural didactics. To this aim, an MIS questionnaire was administered to 262 participants. The questionnaire, originally consisting of a 33-item 5-step Likert scale, was reduced to 28 items covering six areas: (1) Image Formation Speed, (2) Permanence/Stability, (3) Dimensions, (4) Level of Detail/Grain, (5) Distance and (6) Depth of Field or Perspective. Factor analysis confirmed our six-factor hypothesis underlying the 28 items.

  12. Feature and Statistical Model Development in Structural Health Monitoring

    Science.gov (United States)

    Kim, Inho

    All structures suffer wear and tear because of impact, excessive load, fatigue, corrosion, etc., in addition to inherent defects from their manufacturing processes and their exposure to various environmental effects. These structural degradations are often imperceptible, but they can severely affect the structural performance of a component, thereby severely decreasing its service life. Although previous studies of Structural Health Monitoring (SHM) have produced extensive knowledge of parts of the SHM process, such as operational evaluation, data processing, and feature extraction, few studies have addressed statistical model development from a systematic perspective. The first part of this dissertation reviews ultrasonic guided-wave-based structural health monitoring problems in light of the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the domain of analysis are investigated by analytically deriving the conditions for uniqueness of solutions to the ill-posed problem, and are validated experimentally. Based on these distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. This method involves creating time-space representations of the guided Lamb waves (GLWs), collected at a series of locations with a spatially dense distribution along paths at pre-selected angles with respect to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depend on the phase velocity, are selected as the primary features that carry information regarding wave propagation and scattering. The following part of this dissertation presents a novel damage-localization framework using a fully automated process. In order to construct the statistical model for autonomous damage localization, deep-learning techniques such as the restricted Boltzmann machine and deep belief network

  13. Large-scale identification of human protein function using topological features of interaction network

    Science.gov (United States)

    Li, Zhanchao; Liu, Zhiqing; Zhong, Wenqian; Huang, Menghua; Wu, Na; Xie, Yun; Dai, Zong; Zou, Xiaoyong

    2016-11-01

    The annotation of protein function is a vital step toward elucidating the essence of life at a molecular level, and it is also valuable to the biomedical and pharmaceutical industries. Developments in sequencing technology constantly widen the gap between the number of known sequences and the number with known functions. Therefore, it is indispensable to develop computational methods for the annotation of protein function. Herein, a novel method is proposed to identify protein function based on the weighted human protein-protein interaction network and graph theory. Network topology features with local and global information are used to characterise proteins. The minimum redundancy maximum relevance algorithm is used to select an optimized subset of 227 features, and the support vector machine technique is utilized to build the prediction models. The performance of the method is assessed through a 10-fold cross-validation test, with accuracies ranging from 67.63% to 100%. Compared with other annotation methods, the proposed approach achieves a 50% improvement in predictive accuracy. Generally, such network topology features provide insights into the relationship between protein functions and network architectures. The Matlab source code is freely available on request from the authors.
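
The mRMR selection step used here can be sketched with a correlation-based proxy for the mutual-information criterion of the original algorithm; this is an illustrative simplification on synthetic data, not the authors' code:

```python
import numpy as np

def mrmr_rank(X, y, k):
    """Greedy max-relevance min-redundancy ranking, with absolute Pearson
    correlation standing in for the mutual information of classic mRMR."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best_j, best_score = -1, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # redundancy: mean correlation with already-selected features
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            if rel[j] - red > best_score:
                best_j, best_score = j, rel[j] - red
        selected.append(best_j)
    return selected

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200).astype(float)   # binary labels
X = rng.normal(size=(200, 10))              # 10 candidate features
X[:, 3] += 2.0 * y                          # one strongly informative feature
order = mrmr_rank(X, y, 3)                  # informative feature ranked first
```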

  14. Feature and Meta-Models in Clafer: Mixed, Specialized, and Coupled

    DEFF Research Database (Denmark)

    Bąk, Kacper; Czarnecki, Krzysztof; Wasowski, Andrzej

    2011-01-01

    We present Clafer, a meta-modeling language with first-class support for feature modeling. We designed Clafer as a concise notation for meta-models, feature models, mixtures of meta- and feature models (such as components with options), and models that couple feature models and meta-models via co...... models concisely and show that Clafer meets its design objectives using a sample product line. We evaluated Clafer and how it lends itself to analysis on sample feature models, meta-models, and model templates of an E-Commerce platform....

  15. Determination of the interaction parameter and topological scaling features of symmetric star polymers in dilute solution

    KAUST Repository

    Rai, Durgesh K.

    2015-07-15

    Star polymers provide model architectures to understand the dynamic and rheological effects of chain confinement for a range of complex topological structures like branched polymers, colloids, and micelles. It is important to describe the structure of such macromolecular topologies using small-angle neutron and x-ray scattering to facilitate understanding of their structure-property relationships. Modeling of scattering from linear, Gaussian polymers, such as in the melt, has applied the random phase approximation using the Debye polymer scattering function. The Flory-Huggins interaction parameter can be obtained using neutron scattering by this method. Gaussian scaling no longer applies for more complicated chain topologies or when chains are in good solvents. For symmetric star polymers, chain scaling can differ from ν = 0.5 (d_f = 2) due to excluded volume, steric interaction between arms, and enhanced density due to branching. Further, correlation between arms in a symmetric star leads to an interference term in the scattering function first described by Benoit for Gaussian chains. In this work, a scattering function is derived which accounts for interarm correlations in symmetric star polymers as well as the polymer-solvent interaction parameter for chains of arbitrary scaling dimension using a hybrid Unified scattering function. The approach is demonstrated for linear, four-arm and eight-arm polyisoprene stars in deuterated p-xylene.

  16. Determination of the interaction parameter and topological scaling features of symmetric star polymers in dilute solution.

    Science.gov (United States)

    Rai, Durgesh K; Beaucage, Gregory; Ratkanthwar, Kedar; Beaucage, Peter; Ramachandran, Ramnath; Hadjichristidis, Nikos

    2015-07-01

    Star polymers provide model architectures to understand the dynamic and rheological effects of chain confinement for a range of complex topological structures like branched polymers, colloids, and micelles. It is important to describe the structure of such macromolecular topologies using small-angle neutron and x-ray scattering to facilitate understanding of their structure-property relationships. Modeling of scattering from linear, Gaussian polymers, such as in the melt, has applied the random phase approximation using the Debye polymer scattering function. The Flory-Huggins interaction parameter can be obtained using neutron scattering by this method. Gaussian scaling no longer applies for more complicated chain topologies or when chains are in good solvents. For symmetric star polymers, chain scaling can differ from ν = 0.5 (d_f = 2) due to excluded volume, steric interaction between arms, and enhanced density due to branching. Further, correlation between arms in a symmetric star leads to an interference term in the scattering function first described by Benoit for Gaussian chains. In this work, a scattering function is derived which accounts for interarm correlations in symmetric star polymers as well as the polymer-solvent interaction parameter for chains of arbitrary scaling dimension using a hybrid Unified scattering function. The approach is demonstrated for linear, four-arm and eight-arm polyisoprene stars in deuterated p-xylene.

  17. Dual-scale multimedia dynamic synchronization model

    Institute of Scientific and Technical Information of China (English)

    李乃祥

    2009-01-01

    Multimedia synchronization is the key technology in distributed multimedia applications. Resolving synchronization conflicts within and among streams, as well as handling user interaction, refining synchronization granularity, and improving synchronization precision, remain great challenges despite substantial efforts by the research community. The construction method of a dual-scale dynamic synchronous model of multimedia presented in this article realizes multimedia synchronization on two sca...

  18. A Detection Method of Artificial Areas from High Resolution Remote Sensing Images Based on Multi-Scale and Multi-Feature Fusion

    Science.gov (United States)

    Li, P.; Hu, X.; Hu, Y.; Ding, Y.; Wang, L.; Li, L.

    2017-05-01

    In order to solve the problem of automatic detection of artificial objects in high resolution remote sensing images, a method for detecting artificial areas based on multi-scale and multi-feature fusion is proposed. Firstly, geometric features such as corners, straight lines and right angles are extracted at the original resolution, and pseudo corner points, pseudo linear features and pseudo orthogonal angles are filtered out by the self-constraint and mutual constraint between them. Then the radiation intensity map of the image regions with strong geometric characteristics is obtained by the linear inverse-distance-weighted method. Secondly, the original image is reduced to multiple scales and the visual saliency image at each scale is obtained by adaptive weighting of the orthogonal saliency and the local brightness and contrast computed at the corresponding scale. The final visual saliency image is then obtained by fusing the visual saliency images of all scales. Thirdly, the visual saliency images of artificial areas based on multiple scales and multiple features are obtained by fusing, at the decision level, the geometric-feature energy intensity map and the visual saliency image obtained above. Finally, the artificial areas can be segmented using Otsu's method. Experiments show that the proposed method can detect not only large artificial areas such as cities and residential districts, but also correctly detect single-family houses in the countryside. The detection rate of artificial areas reached 92%.
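
The final segmentation step, Otsu's method, chooses the gray-level threshold that maximizes the between-class variance of the histogram. A minimal sketch on a toy bimodal "saliency image" (the image values are illustrative):

```python
import numpy as np

def otsu_threshold(img):
    """Gray level maximizing between-class variance (Otsu's criterion)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0   # ignore empty classes
    return int(np.argmax(sigma_b))

# Toy bimodal image: dark background plus a bright "artificial" patch
img = np.concatenate([np.full(900, 40), np.full(100, 200)]).reshape(20, 50)
t = otsu_threshold(img)
mask = img > t          # binary map of the detected bright pixels
```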

  19. Feature selection and survival modeling in The Cancer Genome Atlas

    Directory of Open Access Journals (Sweden)

    Kim H

    2013-09-01

    Full Text Available Hyunsoo Kim,1 Markus Bredel2 1Department of Pathology, The University of Alabama at Birmingham, Birmingham, AL, USA; 2Department of Radiation Oncology, and Comprehensive Cancer Center, The University of Alabama at Birmingham, Birmingham, AL, USA Purpose: Personalized medicine is predicated on the concept of identifying subgroups of a common disease for better treatment. Identifying biomarkers that predict disease subtypes has been a major focus of biomedical science. In the era of genome-wide profiling, there is controversy as to the optimal number of genes to use as input to a feature selection algorithm for survival modeling. Patients and methods: The expression profiles and outcomes of 544 patients were retrieved from The Cancer Genome Atlas. We compared four different survival prediction methods: (1) the 1-nearest neighbor (1-NN) survival prediction method; (2) a random patient selection method and a Cox-based regression method with nested cross-validation; (3) least absolute shrinkage and selection operator (LASSO) optimization using whole-genome gene expression profiles; or (4) gene expression profiles of cancer pathway genes. Results: The 1-NN method performed better than the random patient selection method in terms of survival predictions, although it does not include a feature selection step. The Cox-based regression method with LASSO optimization using whole-genome gene expression data demonstrated higher survival prediction power than the 1-NN method, but was outperformed by the same method using gene expression profiles of cancer pathway genes alone. Conclusion: The 1-NN survival prediction method may require more patients for better performance, even when omitting censored data. Using preexisting biological knowledge for survival prediction is reasonable as a means to understand the biological system of a cancer, unless the analysis goal is to identify completely unknown genes relevant to cancer biology. Keywords: brain, feature selection

  20. A Simple Model of Low-scale Direct Gauge Mediation

    CERN Document Server

    Csáki, Csaba; Shirman, Yuri; Terning, John

    2007-01-01

    We construct a calculable model of low-energy direct gauge mediation making use of the metastable supersymmetry breaking vacua recently discovered by Intriligator, Seiberg and Shih. The standard model gauge group is a subgroup of the global symmetries of the SUSY breaking sector, and the messengers play an essential role in dynamical SUSY breaking: they are composites of a confining gauge theory, and the holomorphic scalar messenger mass appears as a consequence of the confining dynamics. The SUSY breaking scale is around 100 TeV; nevertheless, the model is calculable. The minimal non-renormalizable coupling of the Higgs to the DSB sector leads in a simple way to a mu-term, while the B-term arises at two-loop order, resulting in a moderately large tan beta. A novel feature of this class of models is that some particles from the dynamical SUSY breaking sector may be accessible at the LHC.

  1. Performance modeling of a feature-aided tracker

    Science.gov (United States)

    Goley, G. Steven; Nolan, Adam R.

    2012-06-01

    In order to provide actionable intelligence in a layered sensing paradigm, exploitation algorithms should produce a confidence estimate in addition to the inference variable. This article presents a methodology and results of one such algorithm for feature-aided tracking of vehicles in wide area motion imagery. To perform experiments, a synthetic environment was developed, which provided explicit knowledge of ground truth, tracker prediction accuracy, and control of operating conditions. This synthetic environment leveraged physics-based modeling simulations to re-create traffic flow, vehicle reflectance, obscuration, and shadowing. With the ability to control operating conditions, as well as the availability of ground truth, several experiments were conducted to test both the tracker and its expected performance. The results show that the performance model produces a meaningful estimate of the tracker performance over the subset of operating conditions.

  2. Numerical Predictions of Cavitating Flow around Model Scale Propellers by CFD and Advanced Model Calibration

    Directory of Open Access Journals (Sweden)

    Mitja Morgut

    2012-01-01

    Full Text Available The numerical predictions of the cavitating flow around two model scale propellers in uniform inflow are presented and discussed. The simulations are carried out using a commercial CFD solver. The homogeneous model is used, and the influence of three widespread mass transfer models on the accuracy of the numerical predictions is evaluated. The mass transfer models in question share the common feature of employing empirical coefficients to adjust the mass transfer rate from water to vapour and back, which can affect the stability and accuracy of the predictions. Thus, for a fair and congruent comparison, the empirical coefficients of the different mass transfer models are first properly calibrated using an optimization strategy. The numerical results obtained with the three calibrated mass transfer models are very similar to each other for the two selected model scale propellers. Nevertheless, a tendency to overestimate the cavity extent is observed and, consequently, the thrust is not properly predicted in the most severe operating conditions.

  3. Feature selection versus feature compression in the building of calibration models from FTIR-spectrophotometry datasets.

    Science.gov (United States)

    Vergara, Alexander; Llobet, Eduard

    2012-01-15

    Undoubtedly, FTIR-spectrophotometry has become a standard in the chemical industry for monitoring, on the fly, the concentrations of reagents and by-products. However, representing chemical samples by FTIR spectra, which are characterized by hundreds if not thousands of variables, poses its own set of challenges, because the spectra must be analyzed in a high-dimensional feature space where many features are likely to be highly correlated and many others affected by noise. Therefore, identifying a subset of features that preserves the classifier/regressor performance seems imperative prior to any attempt to build an appropriate pattern recognition method. In this context, we investigate the benefit of utilizing two different dimensionality reduction methods, namely the minimum Redundancy-Maximum Relevance (mRMR) feature selection scheme and a new self-organized map (SOM) based feature compression, coupled to regression methods to quantitatively analyze two-component liquid samples using FTIR spectrophotometry. Since these methods make it possible to select a small subset of relevant features from FTIR spectra while preserving the statistical characteristics of the target variable being analyzed, we claim that expressing the FTIR spectra by this dimensionality-reduced set of features may be beneficial. We demonstrate the utility of these feature selection schemes in quantifying the distinct analytes within binary mixtures using FTIR spectrophotometry.
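
The selection-versus-compression contrast studied here can be sketched on synthetic data: keep the k channels most correlated with the target (selection) versus project onto k principal directions (compression). Plain least squares stands in for the regression methods, and all data below are synthetic, not FTIR spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_chan = 120, 300                   # few samples, many "spectral" channels
X = rng.normal(size=(n, n_chan))
y = 2.0 * X[:, 10] - 1.5 * X[:, 50] + 0.1 * rng.normal(size=n)

# (a) Feature selection: keep the k channels most correlated with the target
k = 5
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_chan)])
keep = np.argsort(corr)[-k:]
w_sel, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
rmse_sel = float(np.sqrt(np.mean((X[:, keep] @ w_sel - y) ** 2)))

# (b) Feature compression: project onto the first k principal directions
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T
w_cmp, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
rmse_cmp = float(np.sqrt(np.mean((Z @ w_cmp - (y - y.mean())) ** 2)))
```

On this toy problem, target-aware selection recovers the two informative channels, while unsupervised compression spreads the signal across directions chosen without reference to the target.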

  4. Scale Factor Self-Dual Cosmological Models

    CERN Document Server

    dS, U Camara; Sotkov, G M

    2015-01-01

    We implement a conformal time scale factor duality for Friedmann-Robertson-Walker cosmological models, which is consistent with the weak energy condition. The requirement for self-duality determines the equations of state for a broad class of barotropic fluids. We study the example of a universe filled with two interacting fluids, presenting an accelerated and a decelerated period, with manifest UV/IR duality. The associated self-dual scalar field interaction turns out to coincide with the "radiation-like" modified Chaplygin gas models. We present an equivalent realization of them as gauged Kähler sigma models (minimally coupled to gravity) with very specific and interrelated Kähler- and super-potentials. Their applications in the description of hilltop inflation and also as quintessence models for the late universe are discussed.

  5. BUILDING ROBUST APPEARANCE MODELS USING ON-LINE FEATURE SELECTION

    Energy Technology Data Exchange (ETDEWEB)

    PORTER, REID B. [Los Alamos National Laboratory; LOVELAND, ROHAN [Los Alamos National Laboratory; ROSTEN, ED [Los Alamos National Laboratory

    2007-01-29

    In many tracking applications, adapting the target appearance model over time can improve performance. This approach is most popular in high frame rate video applications where latent variables, related to the objects appearance (e.g., orientation and pose), vary slowly from one frame to the next. In these cases the appearance model and the tracking system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In this paper we describe our efforts to track cars in low frame rate data (1 frame/second) acquired from a highly unstable airborne platform. Due to the low frame rate, and poor image quality, the appearance of a particular vehicle varies greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model from all instances of a vehicle we have seen so far. The best appearance model should maximize the future performance of the tracking system, and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose an online feature selection approach to this problem and investigate the performance and computational trade-offs with a real-world dataset.

  6. Full-Scale Tunnel (FST) model

    Science.gov (United States)

    1929-01-01

    Model of Full-Scale Tunnel (FST) under construction. On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. Small included angle for the exit cone; 2. Carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. Tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow. This model can be constructed in a comparatively short time, using 2 by 4 framing with matched sheathing inside, and where circular sections are desired they can be obtained by nailing sheet metal to wooden ribs, which can be cut on the band saw. It is estimated that three months will be required for the construction and testing of such a model and that the cost will be approximately three thousand dollars, one thousand dollars of which will be for the motors. No suitable location appears to exist in any of our present buildings, and it may be necessary to build it outside and cover it with a roof.' George Lewis responded immediately (June 27) granting the authority to proceed. He urged Langley to expedite construction and to employ extra carpenters if necessary. Funds for the model came from the FST project.

  7. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  8. Scaling filtering and multiplicative cascade information integration techniques for geological, geophysical and geochemical data processing and geological feature recognition

    Science.gov (United States)

    Cheng, Q.

    2013-12-01

    This paper introduces several techniques recently developed, based on the concepts of multiplicative cascade processes and multifractals, for processing exploration geochemical and geophysical data for the recognition of geological features and the delineation of target areas for undiscovered mineral deposits. From a nonlinear point of view, extreme geo-processes such as cloud formation, rainfall, hurricanes, flooding, landslides, earthquakes, igneous activity, tectonics and mineralization often show the singular property that they may result in anomalous amounts of energy release or mass accumulation, generally confined to narrow intervals in space or time. The end products of these non-linear processes have in common that they can be modeled as fractals or multifractals. Here we show that the three fundamental concepts of scaling in the context of multifractals (singularity, self-similarity and the fractal dimension spectrum) make multifractal theory and methods useful for geochemical and geophysical data processing for the general purpose of geological feature recognition. These methods include: a local singularity analysis based on an area-density (C-A) multifractal model, used as a scaling high-pass filtering technique capable of extracting weak signals caused by buried geological features; a suite of multifractal filtering techniques based on spectrum density-area (S-A) multifractal models, implemented in various domains including the frequency domain, which can be used for unmixing geochemical or geophysical fields according to distinct generalized self-similarities characterized in a certain domain; and multiplicative cascade processes for the integration of diverse evidential layers of information for the prediction of point events such as the locations of mineral deposits. It is demonstrated by several case studies involving Fe, Sn, Mo-Ag and Mo-W mineral deposits that the singularity method can be utilized to process stream sediment/soil geochemical data and gravity/aeromagnetic data as high
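
    The local singularity analysis mentioned above estimates, for each map location, an index α from how the mean concentration μ(r) in windows of growing half-width r scales as μ(r) ∝ r^(α−E), with E = 2 for map data. A minimal sketch of that log-log slope fit follows; the window sizes and test grid are illustrative assumptions:

```python
import math

def local_singularity(grid, i, j, radii=(1, 2, 3)):
    """Estimate the local singularity index alpha at cell (i, j) of a 2D grid.
    The mean value mu(r) in square windows of half-width r is assumed to scale
    as mu(r) ~ r**(alpha - E) with E = 2 for map data; alpha is recovered from
    the least-squares slope of log mu(r) versus log r."""
    xs, ys = [], []
    for r in radii:
        window = [grid[a][b]
                  for a in range(i - r, i + r + 1)
                  for b in range(j - r, j + r + 1)]
        xs.append(math.log(r))
        ys.append(math.log(sum(window) / len(window)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope + 2.0   # alpha = slope + E

# A smooth background (constant field) gives alpha ≈ E = 2 (non-singular);
# alpha < 2 flags local enrichment, the weak-anomaly signal of interest.
flat = [[5.0] * 9 for _ in range(9)]
print(round(local_singularity(flat, 4, 4), 3))  # → 2.0
```

Mapping α over the whole grid and thresholding α < E acts as the scaling high-pass filter described in the abstract.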

  9. Multi-scale atmospheric environment modelling for urban areas

    Directory of Open Access Journals (Sweden)

    A. A. Baklanov

    2009-04-01

    Modern supercomputers allow realising multi-scale systems for assessment and forecasting of urban meteorology, air pollution and emergency preparedness, and considering nesting with obstacle-resolved models. A multi-scale modelling system with downscaling from the regional to the city scale with the Environment – HIgh Resolution Limited Area Model (Enviro-HIRLAM) and to the micro-scale with the obstacle-resolved Micro-scale Model for Urban Environment (M2UE) is suggested and demonstrated. The M2UE validation results versus the Mock Urban Setting Trial (MUST) experiment indicate satisfactory quality of the model. Necessary conditions for the choice of nested models, building descriptions, areas and resolutions of nested models are analysed. Two-way nesting (up- and down-scaling), in which effects propagate in both directions (from the meso-scale to the micro-scale and from the micro-scale to the meso-scale), is also discussed.

  10. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    OpenAIRE

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high-throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) dat...
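
    A simplified sketch of the enrichment idea behind WFS: weight each structural feature by its (smoothed) frequency ratio in toxic versus non-toxic training compounds, then score a compound as the sum of its features' weights. The log-ratio weighting and the fragment names below are illustrative assumptions; the published model uses a statistical enrichment measure:

```python
import math

def feature_weights(toxic, nontoxic, all_features):
    """Weight each structural feature by the log of its Laplace-smoothed
    frequency ratio in the toxic versus non-toxic compound sets (a simplified
    stand-in for WFS's statistical enrichment)."""
    w = {}
    for f in all_features:
        ft = sum(f in c for c in toxic) + 1
        fn = sum(f in c for c in nontoxic) + 1
        w[f] = math.log((ft / (len(toxic) + 2)) / (fn / (len(nontoxic) + 2)))
    return w

def score(compound, w):
    """Toxicity score of a compound = sum of the weights of its features."""
    return sum(w.get(f, 0.0) for f in compound)

# Compounds represented as sets of hypothetical structural-feature labels.
toxic    = [{"nitro", "arene"}, {"nitro", "halide"}, {"nitro"}]
nontoxic = [{"arene"}, {"hydroxyl"}, {"arene", "hydroxyl"}]
w = feature_weights(toxic, nontoxic, {"nitro", "arene", "halide", "hydroxyl"})
print(score({"nitro", "halide"}, w) > score({"arene", "hydroxyl"}, w))  # → True
```

Because the score is a transparent sum over named substructures, a prediction can be traced back to the exact features that drove it, which is the "chemically intuitive" property the abstract emphasizes.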

  11. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained with adaptive SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale to the exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method's invariance and robustness to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT.

  12. Digital Surface and Terrain Models (DSM,DTM), The DTM associated with the Base Mapping Program consists of mass points and breaklines used primarily for ortho rectification. The DTM specifications included all breaklines for all hydro and transportation features and are the source for the TIPS (Tenn, Published in 2007, 1:4800 (1in=400ft) scale, State of Tennessee.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Digital Surface and Terrain Models (DSM,DTM) dataset, published at 1:4800 (1in=400ft) scale, was produced all or in part from Orthoimagery information as of...

  13. Large scale features and assessment of spatial scale correspondence between TMPA and IMD rainfall datasets over Indian landmass

    Indian Academy of Sciences (India)

    R Uma; T V Lakshmi Kumar; M S Narayanan; M Rajeevan; Jyoti Bhate; K Niranjan Kumar

    2013-06-01

    Daily rainfall datasets of 10 years (1998–2007) of Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) version 6 and India Meteorological Department (IMD) gridded rain gauge data have been compared over the Indian landmass, on both large and small spatial scales. On the larger spatial scale, the pattern correlation between the two datasets on daily scales during individual years of the study period ranges from 0.4 to 0.7. The correlation improved significantly (∼0.9) when the study was confined to specific wet and dry spells, each of about 5–8 days. Wavelet analysis of intraseasonal oscillations (ISO) of the southwest monsoon rainfall shows the percentage contribution of the two major modes (30–50 days and 10–20 days) to range between ∼30–40% and 5–10%, respectively, for the various years. Analysis of inter-annual variability shows that the satellite data underestimate seasonal rainfall by ∼110 mm during the southwest monsoon and overestimate it by ∼150 mm during the northeast monsoon season. At high spatio-temporal scales, viz., the 1° × 1° grid, TMPA data do not correspond to ground truth. We propose here a new analysis procedure to assess the minimum spatial scale at which the two datasets are compatible with each other. This has been done by studying the contribution to total seasonal rainfall from different rainfall rate windows (at 1 mm intervals) on different spatial scales (at the daily time scale). The compatibility spatial scale is seen to be beyond the 5° × 5° average spatial scale over the Indian landmass. This will help to decide the usability of TMPA products, if averaged at appropriate spatial scales, for specific process studies, e.g., cloud scale, meso scale or synoptic scale.
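
    The idea of a compatibility spatial scale can be illustrated by block-averaging two gridded rainfall fields at increasing spatial scales and finding the smallest scale at which they agree. This sketch uses a correlation threshold as the agreement criterion, which is an assumption for illustration; the paper compares contributions from rainfall-rate windows:

```python
def block_average(grid, b):
    """Average a square 2D field over non-overlapping b x b blocks."""
    n = len(grid)
    return [[sum(grid[i + di][j + dj] for di in range(b) for dj in range(b)) / b**2
             for j in range(0, n, b)]
            for i in range(0, n, b)]

def correlation(a, b):
    """Pearson correlation between two 2D fields of the same shape."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def compatibility_scale(sat, gauge, blocks=(1, 2, 4), threshold=0.9):
    """Smallest averaging scale at which the two datasets agree (r >= threshold)."""
    for b in blocks:
        if correlation(block_average(sat, b), block_average(gauge, b)) >= threshold:
            return b
    return None

# Synthetic example: the "satellite" field equals the "gauge" field plus noise
# that cancels within every 2x2 block, so agreement appears only at scale 2.
gauge = [[1, 1, 5, 5], [1, 1, 5, 5], [9, 9, 3, 3], [9, 9, 3, 3]]
noise = [[2, -2, 2, -2], [-2, 2, -2, 2], [2, -2, 2, -2], [-2, 2, -2, 2]]
sat = [[g + e for g, e in zip(gr, er)] for gr, er in zip(gauge, noise)]
print(compatibility_scale(sat, gauge))  # → 2
```

Replacing the correlation test with the paper's rain-rate-window comparison changes only the agreement criterion, not the coarsening loop.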

  14. Fiber modeling and clustering based on neuroanatomical features.

    Science.gov (United States)

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2011-01-01

    DTI tractography allows unprecedented understanding of brain neural connectivity in vivo by capturing water diffusion patterns in brain white-matter microstructures. However, tractography algorithms often output hundreds of thousands of fibers, rendering the computation needed for subsequent data analysis intractable. A remedy is to group the fibers into bundles using fiber clustering techniques. Most existing fiber clustering methods, however, rely on fiber geometrical information only, viewing fibers as curves in 3D Euclidean space. The important neuroanatomical aspect of the fibers is mostly ignored. In this paper, neuroanatomical information is encapsulated in a feature vector called the associativity vector, which functions as the "fingerprint" for each fiber and depicts the connectivity of the fiber with respect to individual anatomies. Using the associativity vectors of fibers, we model the fibers as observations sampled from multivariate Gaussian mixtures in the feature space. An expectation-maximization clustering approach is then employed to group the fibers into 16 major bundles. Experimental results indicate that the proposed method groups the fibers into anatomically meaningful bundles, which are highly consistent across subjects.
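
    The associativity vector described above can be sketched as follows: for each labeled anatomical region, record the fraction of a fiber's points that fall inside it. The axis-aligned box regions and ROI names below are hypothetical stand-ins for atlas labels; in the paper these vectors then feed a Gaussian-mixture EM clustering:

```python
def associativity_vector(fiber, regions):
    """'Fingerprint' of a fiber: for each named anatomical region (here a
    hypothetical axis-aligned box (min_corner, max_corner) standing in for a
    labeled atlas ROI), the fraction of the fiber's points inside it."""
    def inside(p, box):
        lo, hi = box
        return all(l <= c <= h for c, l, h in zip(p, lo, hi))
    n = len(fiber)
    return [sum(inside(p, box) for p in fiber) / n for box in regions.values()]

# Two hypothetical ROIs and one fiber sampled at four 3D points.
regions = {
    "ROI_A": ((0, 0, 0), (1, 1, 1)),
    "ROI_B": ((2, 0, 0), (3, 1, 1)),
}
fiber = [(0.1, 0.5, 0.5), (0.9, 0.5, 0.5), (2.5, 0.5, 0.5), (5.0, 0.5, 0.5)]
print(associativity_vector(fiber, regions))  # → [0.5, 0.25]
```

Fibers with similar fingerprints connect the same anatomies, so clustering in this feature space groups them by connectivity rather than by raw 3D geometry.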

  15. Persistent small-scale features in maps of the anisotropy of ocean surface velocities

    Science.gov (United States)

    Sen, A.; Arbic, B. K.; Scott, R. B.; Holland, C. L.; Logan, E.; Qiu, B.

    2006-12-01

    Much of the stirring and mixing in the upper ocean is due to geostrophically balanced mesoscale eddies. Ocean general circulation models commonly parameterize eddy effects and can aid in predicting dispersal of materials throughout the ocean or in predicting long-term climate change. Parameterizations of eddy mixing depend on the isotropy of the eddies. Motivated by this, we investigate the isotropy of oceanic mesoscale eddies with seven years of sea surface height data recorded by satellite altimeters. From these data, we determined a sea surface height anomaly, and surface geostrophic velocities u and v in the zonal (east-west) and meridional (north-south) directions, respectively. From the latter two quantities we can calculate zonal and meridional kinetic energies u² and v². Integrals of u² and v² around latitude bands 10 degrees wide are nearly equal, in contrast with the results of simple beta-plane geostrophic turbulence models, which suggest that zonal motions should predominate. Maps of the quantity u² − v² (normalized by standard error) show fine-scale structures that persist over times longer than the lifespan of turbulent eddies. Thus the mesoscale eddy field is locally anisotropic almost everywhere. Further investigation into the causes of these small-scale structures is needed and may take advantage of animations of sea surface height, in which quasi-circular, westward-propagating eddies can easily be seen.
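
    The quantities u² and v² follow from geostrophy: u = −(g/f) ∂η/∂y and v = (g/f) ∂η/∂x for an SSH anomaly η. A minimal sketch of computing the anisotropy u² − v² from a gridded SSH field by central differences; the grid spacing and Coriolis parameter are illustrative values:

```python
def geostrophic_anisotropy(eta, dx, dy, g=9.81, f=1e-4):
    """u = -(g/f) dEta/dy and v = (g/f) dEta/dx by central differences on the
    interior of a gridded SSH anomaly field; returns mean(u^2) - mean(v^2),
    positive when the flow is zonally dominated."""
    ny, nx = len(eta), len(eta[0])
    u2 = v2 = 0.0
    count = 0
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            u = -(g / f) * (eta[i + 1][j] - eta[i - 1][j]) / (2 * dy)
            v = (g / f) * (eta[i][j + 1] - eta[i][j - 1]) / (2 * dx)
            u2 += u * u
            v2 += v * v
            count += 1
    return (u2 - v2) / count

# SSH varying only with latitude (rows) drives purely zonal flow: u² − v² > 0.
zonal_eta = [[0.1 * i] * 5 for i in range(5)]
print(geostrophic_anisotropy(zonal_eta, dx=1e5, dy=1e5) > 0)  # → True
```

Normalizing this difference by its standard error, as in the abstract, then turns the raw anisotropy into the mapped significance quantity.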

  16. Possibility Study of Scale Invariant Feature Transform (SIFT) Algorithm Application to Spine Magnetic Resonance Imaging.

    Science.gov (United States)

    Lee, Dong-Hoon; Lee, Do-Wan; Han, Bong-Soo

    2016-01-01

    The purpose of this study is to apply the scale invariant feature transform (SIFT) algorithm to stitch cervical-thoracic-lumbar (C-T-L) spine magnetic resonance (MR) images, providing a view of the entire spine in a single image. All MR images were acquired with a fast spin echo (FSE) pulse sequence using two MR scanners (1.5 T and 3.0 T). The stitching procedures for each part of the spine MR image were performed and implemented in a graphical user interface (GUI) configuration. The stitching process is performed in two modes: manual point-to-point (mPTP) selection, in which the user specifies corresponding matching points, and automated point-to-point (aPTP) selection, performed by the SIFT algorithm. The stitched images using the SIFT algorithm showed well-registered results, and quantitatively acquired values also indicated small errors compared with the stitching algorithms commercially mounted in MRI systems. Our study presents a preliminary validation of the SIFT algorithm applied to spine MRI, and the results indicate that the proposed approach performs well enough to improve diagnosis. We believe that our approach can be helpful for clinical application and can be extended to image stitching in other medical imaging modalities.
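
    In the simplest case, the manual point-to-point (mPTP) mode reduces to estimating a translation from user-selected corresponding points and pasting the images into a common frame. This pure-translation sketch is an assumption for illustration; SIFT-based stitching estimates the transform from automatically matched descriptors instead:

```python
def estimate_offset(pts_a, pts_b):
    """Least-squares translation mapping image B onto image A's frame from
    matched point pairs; for pure translation this is just the mean of the
    pairwise coordinate differences."""
    n = len(pts_a)
    dr = sum(a[0] - b[0] for a, b in zip(pts_a, pts_b)) / n
    dc = sum(a[1] - b[1] for a, b in zip(pts_a, pts_b)) / n
    return dr, dc

def stitch(img_a, img_b, offset):
    """Paste B into A's frame at the rounded offset; overlapping pixels are
    averaged. Images are 2D lists; assumes a non-negative row/col offset."""
    dr, dc = round(offset[0]), round(offset[1])
    rows = max(len(img_a), dr + len(img_b))
    cols = max(len(img_a[0]), dc + len(img_b[0]))
    out = [[None] * cols for _ in range(rows)]
    for i, row in enumerate(img_a):
        for j, v in enumerate(row):
            out[i][j] = v
    for i, row in enumerate(img_b):
        for j, v in enumerate(row):
            cur = out[dr + i][dc + j]
            out[dr + i][dc + j] = v if cur is None else (cur + v) / 2
    return out

# Two tiny "slabs" overlapping by one row, with two matched landmark pairs.
img_a = [[1, 1, 1], [2, 2, 2]]
img_b = [[2, 2, 2], [3, 3, 3]]
off = estimate_offset([(1, 0), (1, 2)], [(0, 0), (0, 2)])
print(stitch(img_a, img_b, off))  # → [[1, 1, 1], [2.0, 2.0, 2.0], [3, 3, 3]]
```

The aPTP mode replaces the manual point pairs with SIFT keypoint matches but leaves the offset estimation and compositing steps unchanged.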

  17. Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks

    Institute of Scientific and Technical Information of China (English)

    LIN Min; WANG Gang; CHEN Tian-Lun

    2006-01-01

    A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlations. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.

  18. Genome-scale constraint-based modeling of Geobacter metallireducens

    Directory of Open Access Journals (Sweden)

    Famili Iman

    2009-01-01

    metabolic model, which provided a fast and cost-effective way to understand the metabolism of G. metallireducens. Conclusion We have developed a genome-scale metabolic model for G. metallireducens that features both metabolic similarities and differences to the published model for its close relative, G. sulfurreducens. Together these metabolic models provide an important resource for improving strategies on bioremediation and bioenergy generation.

  19. Elysium Region, Mars: Tests of Lithospheric Loading Models for the Formation of Tectonic Features

    Science.gov (United States)

    Hall, J. L.; Solomon, S. C.; Head, J. W.

    1985-01-01

    In an effort to constrain the tectonic history and mechanical properties of the lithosphere in the Elysium province, the stress fields predicted by different models are compared to the observed tectonic features of the region. The models are all products of volcanic loading of the Martian lithosphere, but at three different scales: global (Tharsis), regional (Elysium Planitia), and local (individual shields). Conclusions: The concentric graben surrounding Elysium Mons can be ascribed to the flexural response of an approximately 50-km-thick elastic lithosphere to loading by the volcano. There is no tectonic evidence for the support of larger-scale Elysium Planitia volcanic units by lithospheric flexure. The quasi-global loading of the Tharsis rise appears to have produced identifiable tectonic effects in the Elysium region.

  20. The scale-invariant scotogenic model

    Energy Technology Data Exchange (ETDEWEB)

    Ahriche, Amine [Department of Physics, University of Jijel,PB 98 Ouled Aissa, DZ-18000 Jijel (Algeria); The Abdus Salam International Centre for Theoretical Physics,Strada Costiera 11, I-34014, Trieste (Italy); McDonald, Kristian L. [ARC Centre of Excellence for Particle Physics at the Terascale,School of Physics, The University of Sydney,NSW 2006 (Australia); Nasri, Salah [Physics Department, UAE University,POB 17551, Al Ain (United Arab Emirates)

    2016-06-30

    We investigate a minimal scale-invariant implementation of the scotogenic model and show that viable electroweak symmetry breaking can occur while simultaneously generating one-loop neutrino masses and the dark matter relic abundance. The model predicts the existence of a singlet scalar (dilaton) that plays the dual roles of triggering electroweak symmetry breaking and sourcing lepton number violation. Important constraints are studied, including those from lepton flavor violating effects and dark matter direct-detection experiments. The latter turn out to be somewhat severe, already excluding large regions of parameter space. Nonetheless, viable regions of parameter space are found, corresponding to dark matter masses below (roughly) 10 GeV and above 200 GeV.

  1. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2016-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth, and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypothesis testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome and have been used as scaffolds for analysis of high throughput data to allow mechanistic interpretation of changes in expression. Finally, GEMs allow quantitative flux predictions using flux balance analysis (FBA). Here we critically review the requirements for successful FBA simulations of cancer cells.

  2. Thermal scale modeling of radiation-conduction-convection systems.

    Science.gov (United States)

    Shannon, R. L.

    1972-01-01

    Investigation of thermal scale modeling applied to radiation-conduction-convection systems with particular emphasis on the spacecraft cabin atmosphere/cabin wall thermal interface. The 'modified material preservation,' 'temperature preservation,' 'scaling compromises,' and 'Nusselt number preservation' scale modeling techniques and their inherent limitations and problem areas are described. The compromised scaling techniques of mass flux preservation and heat transfer coefficient preservation show promise of giving adequate thermal similitude while preserving both gas and temperature in the scale model. The use of these compromised scaling techniques was experimentally demonstrated in tests of full scale and 1/4 scale models. Correlation of test results for free and forced convection under various test conditions shows the effectiveness of these scaling techniques. It is concluded that either mass flux or heat transfer coefficient preservation may result in adequate thermal similitude depending on the system to be modeled. Heat transfer coefficient preservation should give good thermal similitude for manned spacecraft scale modeling applications.

  3. Large Scale, High Resolution, Mantle Dynamics Modeling

    Science.gov (United States)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetic and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which gives a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10⁵ degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10⁷ degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D

  4. Feature extraction and models for speech: An overview

    Science.gov (United States)

    Schroeder, Manfred

    2002-11-01

    Modeling of speech has a long history, beginning with Count von Kempelen's 1770 mechanical speaking machine. Even then human vowel production was seen as resulting from a source (the vocal cords) driving a physically separate resonator (the vocal tract). Homer Dudley's 1928 frequency-channel vocoder and many of its descendants are based on the same successful source-filter paradigm. For linguistic studies as well as practical applications in speech recognition, compression, and synthesis (see M. R. Schroeder, Computer Speech), the extant models require the (often difficult) extraction of numerous parameters such as the fundamental and formant frequencies and various linguistic distinctive features. Some of these difficulties were obviated by the introduction of linear predictive coding (LPC) in 1967, in which the filter part is an all-pole filter, reflecting the fact that for non-nasalized vowels the vocal tract is well approximated by an all-pole transfer function. In the now ubiquitous code-excited linear prediction (CELP), the source part is replaced by a code book which (together with a perceptual error criterion) permits speech compression to very low bit rates at high speech quality for the Internet and cell phones.
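
    The all-pole (LPC) filter mentioned above is classically fitted by solving the Toeplitz normal equations with the Levinson-Durbin recursion. A minimal sketch, recovering the coefficients of a synthetic second-order autoregressive signal; the AR parameters and series length are illustrative:

```python
import random

def autocorr(x, maxlag):
    """Raw autocorrelation r[0..maxlag] of a signal."""
    return [sum(x[i] * x[i - k] for i in range(k, len(x)))
            for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for all-pole (LPC) coefficients
    a[1..p] in the predictor x[n] ≈ sum_k a[k] * x[n-k]."""
    a = [0.0] * (order + 1)
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e                     # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1 - k * k)                # updated prediction error power
    return a[1:]

# Synthetic AR(2) "vocal tract output": x[n] = 0.75 x[n-1] - 0.5 x[n-2] + noise.
random.seed(0)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.75 * x[-1] - 0.5 * x[-2] + random.gauss(0, 1))
a = levinson_durbin(autocorr(x, 2), 2)
print([round(c, 2) for c in a])  # close to [0.75, -0.5]
```

In a speech coder the same fit runs per frame at orders around 10-16, and the residual after inverse filtering is what CELP replaces with a code-book excitation.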

  5. Feature network models for proximity data : statistical inference, model selection, network representations and links with related models

    NARCIS (Netherlands)

    Frank, Laurence Emmanuelle

    2006-01-01

    Feature Network Models (FNM) are graphical structures that represent proximity data in a discrete space with the use of features. A statistical inference theory is introduced, based on the additivity properties of networks and the linear regression framework. Considering features as predictor variables

  6. Scale-Bridging Model Development for Coal Particle Devolatilization

    CERN Document Server

    Schroeder, Benjamin B; Smith, Philip J; Fletcher, Thomas H; Packard, Andrew; Frenklach, Michael; Hegde, Arun; Li, Wenyu; Oreluk, James

    2016-01-01

    When performing large-scale, high-performance computations of multi-physics applications, it is common to limit the complexity of the physics sub-models comprising the simulation. For a hierarchical system of coal boiler simulations, a scale-bridging model is constructed to capture characteristics appropriate for the application scale from a detailed coal devolatilization model. Such scale-bridging allows full descriptions of scale-applicable physics, while functioning at reasonable computational cost. This study presents a variation on multi-fidelity modeling, with a detailed physics model, the chemical percolation devolatilization model, being used to calibrate a scale-bridging model for the application of interest. The application space provides essential context for designing the scale-bridging model by defining scales, determining requirements and weighting desired characteristics. A single kinetic reaction equation with a functional yield model and distributed activation energy is implemented to act as the scal...

  7. Small scale karst features (tube karren) as evidence of a latest Quaternary fossil landslide

    Science.gov (United States)

    Stöger, Tobias; Plan, Lukas; Draganits, Erich

    2017-04-01

    At least since 1933, numerous small dissolutional holes in the ceilings of overhangs and small caves have been known from a restricted area in the Northern Calcareous Alps in Lower Austria, but they have not been investigated until now. These tube-shaped structures are a few centimetres in diameter, more or less vertical, taper upwards, are closed at the top and penetrate some tens of centimetres into the Middle Triassic limestone. Very similar features were described by Simms (2002) from the shores of three lakes in western Ireland and termed Röhrenkarren or tube karren. According to his model they formed by condensation corrosion within air pockets trapped by seasonal floods. The features investigated in the present study occur on both sides of a valley in the north-eastern part of the Northern Calcareous Alps, south of the city of Sankt Pölten. Presently there is no lake, and so far no palaeo-lake is known from this area. Based on airborne laser scanning data and field observations in a narrow section of the valley downstream of the tube karren sites, a previously unknown potential fossil landslide was discovered. The clayey-silty sediments upstream of the landslide are interpreted as palaeo-lake sediments. This interpretation is supported by the existence of abundant dragonfly eggs within these deposits. The same fine-grained sediments are partly also found inside the tube karren. These observations are interpreted to indicate that a landslide-dammed palaeo-lake formed when the mass movement blocked the river, and that the tube karren were formed by seasonal fluctuations of the lake level. Geochronological dating of calcite crusts covering the karren and of the organic material of the dragonfly eggs is under way. As the karren features look quite fresh and unweathered, and given the diffuse shape of the landslide, a late Quaternary age is estimated. References Simms, M.J. 2002. The origin of enigmatic, tubular, lake-shore karren: a mechanism for rapid dissolution of limestone in carbonate

  8. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    Comparison between small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects are present, as the small-scale model underpredicts the overtopping discharge.

  9. Self adaptive multi-scale morphology AVG-Hat filter and its application to fault feature extraction for wheel bearing

    Science.gov (United States)

    Deng, Feiyue; Yang, Shaopu; Tang, Guiji; Hao, Rujiang; Zhang, Mingliang

    2017-04-01

    Wheel bearings are essential mechanical components of trains, and fault detection in wheel bearings is of great significance for avoiding economic losses and casualties. However, under realistic operating conditions, detecting and extracting fault features hidden in the heavy noise of the vibration signal is a challenging task. Therefore, a novel method called the adaptive multi-scale AVG-Hat morphology filter (MF) is proposed to address it. The morphology AVG-Hat operator not only greatly suppresses the interference of the strong background noise, but also enhances the ability to extract fault features. The improved envelope spectrum sparsity (IESS) is proposed as a new evaluation index to select the optimal filtered signal processed by the multi-scale AVG-Hat MF; it provides a comprehensive evaluation of the intensity of the fault impulses relative to the background noise. The weighting coefficients of the different-scale structural elements (SEs) in the multi-scale MF are adaptively determined by the particle swarm optimization (PSO) algorithm. The effectiveness of the method is validated by analyzing real wheel bearing fault vibration signals (outer race fault, inner race fault and rolling element fault). The results show that the proposed method improves fault feature extraction compared with the multi-scale combined morphological filter (CMF) and multi-scale morphology gradient filter (MGF) methods.
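
    The paper gives the exact AVG-Hat definition; a minimal numpy sketch of multi-scale morphological top-hat filtering along these lines (the averaging rule, flat structuring elements, and equal weights below are assumptions, not the authors' formulation) might look like:

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def _erode(x, size):
        # Grey-scale erosion with a flat structuring element: sliding minimum.
        # Use odd sizes so the output keeps the input length.
        xp = np.pad(x, size // 2, mode="edge")
        return sliding_window_view(xp, size).min(axis=1)

    def _dilate(x, size):
        # Grey-scale dilation: sliding maximum.
        xp = np.pad(x, size // 2, mode="edge")
        return sliding_window_view(xp, size).max(axis=1)

    def avg_hat(x, size):
        # Average of the white top-hat (x - opening) and the black top-hat
        # (closing - x); the exact combination rule here is an assumption.
        opening = _dilate(_erode(x, size), size)
        closing = _erode(_dilate(x, size), size)
        return 0.5 * ((x - opening) + (closing - x))

    def multiscale_avg_hat(x, sizes, weights):
        # Weighted sum over structuring-element scales; the paper determines
        # the weights with PSO and ranks candidate outputs by its IESS index.
        return sum(w * avg_hat(x, s) for w, s in zip(weights, sizes))
    ```

    On an impulse-plus-flat-background signal this passes fault-like spikes while cancelling the smooth background, which is the qualitative behaviour the abstract describes.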

  10. A Regional Model Study of Synoptic Features Over West Africa

    Science.gov (United States)

    Druyan, Leonard M.; Fulakeza, Matthew; Lonergan, Patrick; Saloum, Mahaman; Hansen, James E. (Technical Monitor)

    2001-01-01

    Synoptic weather features over West Africa were studied in simulations by the regional simulation model (RM) at the NASA/Goddard Institute for Space Studies. These pioneering simulations represent the beginning of an effort to adapt regional models for weather and climate prediction over West Africa. The RM uses a Cartesian grid with 50 km horizontal resolution and fifteen vertical levels. An ensemble of four simulations was forced with lateral boundary conditions from ECMWF global analyses for the period 8-22 August 1988. The simulated mid-tropospheric circulation includes the skillful development and movement of several African wave disturbances. Wavelet analysis of mid-tropospheric winds detected a dominant periodicity of about 4 days and a secondary periodicity of 5-8 days. Spatial distributions of RM precipitation and precipitation time series were validated against daily rain gauge measurements and ISCCP satellite infrared cloud imagery. The time-space distribution of simulated precipitation was made more realistic by combining the ECMWF initial conditions with a 24-hr spin-up of the moisture field and by damping high-frequency gravity waves through dynamic initialization. Model precipitation "forecasts" over the Central Sahel were correlated with observations for about three days, but reinitializing with observed data on day 5 resulted in a dramatic improvement in the precipitation validation over the remaining 9 days. Results imply that information supplied via the lateral boundary conditions is not always sufficient to minimize departures between simulated and actual precipitation patterns for more than several days. In addition, there was some evidence that the new initialization may increase the simulations' sensitivity to the quality of the lateral boundary conditions.

  11. A Novel Approach in Quantifying the Effect of Urban Design Features on Local-Scale Air Pollution in Central Urban Areas.

    Science.gov (United States)

    Miskell, Georgia; Salmond, Jennifer; Longley, Ian; Dirks, Kim N

    2015-08-04

    Differences in urban design features may affect emission and dispersion patterns of air pollution at local scales within cities. However, the complexity of urban forms, the interdependence of variables, and the temporal and spatial variability of processes make it difficult to quantify the determinants of local-scale air pollution. This paper uses a combination of dense measurements and a novel approach to land-use regression (LUR) modeling to identify key controls on concentrations of ambient nitrogen dioxide (NO2) at a local scale within a central business district (CBD). Sixty-two locations were measured over 44 days in Auckland, New Zealand, at high density (study area 0.15 km²). A local-scale LUR model was developed, with seven variables identified as determinants based on standard model criteria. A novel method for improving the standard LUR design was developed using two independent data sets (at local and "city" scales) to generate improved accuracy in predictions and greater confidence in results. This revised multiscale LUR model identified three urban design variables (intersection presence, proximity to a bus stop, and street width) as the most significant determinants of local-scale air quality, and showed improved adaptability between data sets.
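
    At its core, LUR fits monitored concentrations against site covariates by least squares. A minimal sketch with numpy (the covariate names mirror the three determinants named above, but the coefficients, counts, and data are illustrative assumptions, not the study's):

    ```python
    import numpy as np

    # Hypothetical site covariates: intersection presence (0/1), distance to
    # the nearest bus stop (m), street width (m) -- illustrative stand-ins
    # for the three determinants identified in the study.
    rng = np.random.default_rng(0)
    n = 62                                   # number of monitored sites
    X = np.column_stack([
        np.ones(n),                          # intercept
        rng.integers(0, 2, n),               # intersection present
        rng.uniform(10, 300, n),             # bus-stop proximity
        rng.uniform(8, 30, n),               # street width
    ])
    true_beta = np.array([20.0, 8.0, -0.02, -0.3])    # invented coefficients
    no2 = X @ true_beta + rng.normal(0.0, 1.0, n)     # synthetic NO2 (ug/m3)

    # Ordinary least squares: beta = argmin ||X b - y||^2
    beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
    ```

    The multiscale refinement in the paper would then check such a fit against an independent city-scale data set before trusting the coefficients.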

  12. Modeling Research on Manufacturing Execution System Based on Large-scale System Cybernetics

    Institute of Scientific and Technical Information of China (English)

    WU Yu; XU Xiao-dong; LI Cong-xin

    2008-01-01

    A cybernetics model of the manufacturing execution system (MES_CM) was proposed and studied from the viewpoint of cybernetics. Combining this with the features of manufacturing systems, the MES_CM was modeled by the "generalized modeling" method discussed in large-scale system theory. The mathematical model of the MES_CM was constructed with the generalized operator model, and the main characteristics of the MES_CM were analyzed.

  13. Multi-Scale Modeling of Plasma Thrusters

    Science.gov (United States)

    Batishchev, Oleg

    2004-11-01

    Plasma thrusters are characterized by multiple spatial and temporal scales arising from intrinsic physical processes such as gas ionization, wall effects and plasma acceleration. Characteristic times for hot plasma and cold gas differ by 6-7 orders of magnitude, and typical collisional mean free paths vary by 3-5 orders along the device. This makes truly self-consistent modeling of thrusters questionable, yet such modeling is vital for understanding the complex physics and non-linear dynamics and for optimizing performance. To overcome this problem we propose the following approach. All processes are divided into two groups: fast and slow. The slow ones include gas evolution with known sources and an ionization sink. The ionization rate, transport coefficients and energy sources are defined during the "fast step". The two groups are linked through external iterations. Multiple spatial scales are handled using a moving adaptive mesh. Development and application of this method to the VASIMR helicon plasma source and other thrusters will be discussed. Supported by NASA.
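
    The fast/slow splitting with external iterations can be sketched abstractly; the toy rate equations and coefficients below are placeholders for illustration, not the VASIMR physics:

    ```python
    def fast_step(n_gas, dt_fast, steps):
        # "Fast" plasma step: advance ionization at frozen gas density and
        # return the average ionization rate seen by the slow process.
        k_ion = 0.1                     # toy ionization rate coefficient
        n_plasma, rate_sum = 0.0, 0.0
        for _ in range(steps):
            rate = k_ion * n_gas
            n_plasma += rate * dt_fast
            rate_sum += rate
        return rate_sum / steps

    def slow_step(n_gas, ion_rate, dt_slow):
        # "Slow" gas step: deplete the neutral gas using the rate supplied
        # by the fast step (the ionization sink mentioned above).
        return n_gas - ion_rate * dt_slow

    # External iteration linking the two timescales (6-7 orders apart in
    # the real device; compressed here for the sketch).
    n_gas = 1.0
    for _ in range(50):
        rate = fast_step(n_gas, dt_fast=1e-4, steps=100)
        n_gas = slow_step(n_gas, rate, dt_slow=0.1)
    ```

    Each outer pass plays the role of one external iteration: the fast solve supplies closure quantities (here just an ionization rate) that the slow gas update consumes.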

  14. Latent Feature Models for Uncovering Human Mobility Patterns from Anonymized User Location Traces with Metadata

    KAUST Repository

    Alharbi, Basma Mohammed

    2017-04-10

    In the mobile era, data capturing individuals’ locations have become unprecedentedly available. Data from Location-Based Social Networks is one example of large-scale user-location data. Such data provide a valuable source for understanding the patterns governing human mobility, and thus enable a wide range of research. However, mining and utilizing raw user-location data is a challenging task. This is mainly due to the sparsity of data (at the user level), the power-law distribution of user and location check-in degrees (at the global level), and, more importantly, the lack of a uniform low-dimensional feature space describing users. Three latent feature models are proposed in this dissertation. Each proposed model takes as input a collection of user-location check-ins, and outputs a new representation space for users and locations respectively. To avoid invading users’ privacy, the proposed models are designed to learn from anonymized location data where only IDs - not geophysical positioning or category - of locations are utilized. To enrich the inferred mobility patterns, the proposed models incorporate metadata, often associated with user-location data, into the inference process. In this dissertation, two types of metadata are utilized to enrich the inferred patterns: timestamps and social ties. Time adds context to the inferred patterns, while social ties amplify incomplete user-location check-ins. The first proposed model incorporates timestamps by learning from collections of users’ locations sharing the same discretized time. The second proposed model also incorporates time into the learning model, yet takes a further step by considering time at different scales (hour of a day, day of a week, month, and so on). This change in modeling time allows for capturing meaningful patterns over different time scales. The last proposed model incorporates social ties into the learning process to compensate for inactive users who contribute a large volume

  15. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects...... of experimental task (i.e., real-time vs. annotated segmentation), nor of musicianship on boundary perception are clear. Our study assesses musicianship effects and differences between segmentation tasks. We conducted a real-time experiment to collect segmentations by musicians and nonmusicians from nine musical...... pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary...
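
    A multi-scale KDE boundary model in the spirit described above can be sketched directly: pooled boundary indications are smoothed at several bandwidths, each bandwidth giving one segmentation scale. The boundary times and bandwidths below are illustrative assumptions, not the study's data:

    ```python
    import numpy as np

    def boundary_density(times, grid, bandwidth):
        # Gaussian kernel density estimate over listeners' boundary
        # indications; larger bandwidths yield coarser segmentation scales.
        d = grid[:, None] - np.asarray(times, float)[None, :]
        k = np.exp(-0.5 * (d / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
        return k.mean(axis=1)

    # Hypothetical boundary indications (seconds) pooled across listeners:
    # a tight cluster near 30 s, a smaller one near 61 s, one lone mark.
    times = [30.1, 29.8, 30.4, 61.0, 60.7, 90.2]
    grid = np.linspace(0.0, 120.0, 1201)

    # One density curve per smoothing scale -> a multi-scale model.
    curves = {bw: boundary_density(times, grid, bw) for bw in (1.0, 3.0, 9.0)}
    ```

    Peaks of the fine-bandwidth curve mark locally agreed boundaries; the coarse curves retain only section-level structure.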

  16. FEATURES OF THE PROCESS MODEL FOR PENITENTIARY EDUCATION SYSTEM DIVERSIFICATION

    Directory of Open Access Journals (Sweden)

    Neile Kayumovna Schepkina

    2014-01-01

    Full Text Available The article covers features of the process model for penitentiary education system diversification. Issues of prison inmate education have remained relevant over the past 30 years, as the criminal-executive system has undergone a number of changes due to amendments to criminal laws and rules of proceedings, including those driven by international standards and the European Prison Rules, which ensure the right of imprisoned persons to education. The development concept for the Russian criminal-executive, court supervision and correctional system, adopted for implementation through 2020, provides for qualitative changes in approaches to the practice of serving sentences and to measures for preventing recidivism. Creating a set of incentives for the social adaptation of this special group of inmates, both while serving their sentences and afterwards, is the basic category in the range of initiatives currently being considered in developing the penitentiary system. One of the most significant of these is the incentive to take advantage of the educational opportunities available in prison.

  17. A Framework for Evaluating Regional-Scale Numerical Photochemical Modeling Systems

    Science.gov (United States)

    This paper discusses the need for critically evaluating regional-scale (~ 200-2000 km) three dimensional numerical photochemical air quality modeling systems to establish a model's credibility in simulating the spatio-temporal features embedded in the observations. Because of li...

  18. A study of key features of random atmospheric disturbance models for the approach flight phase

    Science.gov (United States)

    Heffley, R. K.

    1977-01-01

    An analysis and brief simulator experiment were performed to identify and classify important features of random turbulence for the landing approach flight phase. The analysis of various wind models was carried out within the context of the longitudinal closed-loop pilot/vehicle system. The analysis demonstrated the relative importance of atmospheric disturbance scale lengths, horizontal versus vertical gust components, decreasing altitude, and spectral forms of disturbances versus the pilot/vehicle system. Among certain competing wind models, the analysis predicted no significant difference in pilot performance. This was confirmed by a moving base simulator experiment which evaluated the two most extreme models. A number of conclusions were reached: attitude constrained equations do provide a simple but effective approach to describing the closed-loop pilot/vehicle. At low altitudes the horizontal gust component dominates pilot/vehicle performance.

  19. Design Intent for CAD Modeling Features Using Boolean Operations

    Science.gov (United States)

    Sonawane, Chandrakant R.; Sujit, Ghadge

    2017-05-01

    The objective of this paper is to enhance design intent by adding a rule that finds the intersection edges created between Boolean features. Design intent is a core module in CAD software, used for smart design by referencing elements to create required features. In general, a particular design intent forms a particular rule that can be utilized for a specified purpose. In this paper, a design intent rule for the Edge Blend feature is designed, implemented and integrated with CAD software. The major contribution of this paper is a new design intent rule that picks the intersection edges of the feature; in doing so, the rule references hierarchy objects rather than topology, for greater reliability.

  20. Catchment scale tracer testing from karstic features in a porous limestone

    Science.gov (United States)

    Maurice, L.; Atkinson, T. C.; Williams, A. T.; Barker, J. A.; Farrant, A. R.

    2010-07-01

    Tracer testing was undertaken from sinking streams feeding the Chalk, a porous limestone aquifer characterised by frequent small-scale surface karst features. The objective was to investigate the nature and extent of sub-surface karstic development in the aquifer. Previous tracer testing had demonstrated rapid flow combined with low attenuation of tracer. In this study, at two sites rapid groundwater flow was combined with very high attenuation, and at two other sites no tracer was detected at springs within the likely catchment area of the stream sinks tested, suggesting that tracer was totally attenuated along the flowpath. It is proposed that the networks beneath stream sinks in the Chalk and other mildly karstic aquifers distribute recharge into multiple enlarged fractures that divide and become smaller at each division, whereas the networks around springs have a predominantly tributary topology that concentrates flow into a few relatively large cavities, a morphology with similarities to the early stages of karstification. Tracer attenuation is controlled by the degree to which the two networks are directly connected. In the first state, there is no direct linkage, and flow between the two networks is via primary fractures in which tracer attenuation is extreme. The second state is at a percolation threshold, in which a single direct link joins the two networks: a very small proportion of tracer reaches the spring rapidly, but overall attenuation is very high. In the third state, the recharge and discharge networks are integrated, so a large fraction of tracer reaches the spring and peak concentrations are relatively high. 
Despite the large number of stream sinks that recharge the Chalk aquifer, these results suggest that sub-surface conduit development may not always be continuous, with flow down smaller fissures and fractures causing high attenuation of solutes and particulates providing a degree of protection to groundwater outlets that is

  1. Computer aided polymer design using multi-scale modelling

    Directory of Open Access Journals (Sweden)

    K. C. Satyanarayana

    2010-09-01

    Full Text Available The ability to predict the key physical and chemical properties of polymeric materials from their repeat-unit structure and chain-length architecture prior to synthesis is of great value for the design of polymer-based chemical products, with new functionalities and improved performance. Computer aided molecular design (CAMD methods can expedite the design process by establishing input-output relations between the type and number of functional groups in a polymer repeat unit and the desired macroscopic properties. A multi-scale model-based approach that combines a CAMD technique based on group contribution plus models for predicting polymer repeat unit properties with atomistic simulations for providing first-principles arrangements of the repeat units and for predictions of physical properties of the chosen candidate polymer structures, has been developed and tested for design of polymers with desired properties. A case study is used to highlight the main features of this multi-scale model-based approach for the design of a polymer-based product.
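
    The group-contribution building block of CAMD amounts to summing tabulated contributions over the functional groups of a repeat unit. A minimal sketch (the group names, contribution values, and base constant below are invented for illustration and do not come from any published GC method):

    ```python
    # Illustrative group-contribution estimate of a polymer repeat-unit
    # property (e.g. a glass-transition-like temperature). All numbers
    # here are invented placeholders.
    GROUP_CONTRIB = {"CH2": 2.7, "CH(CH3)": 8.0, "C6H4": 25.0, "COO": 12.0}

    def estimate_property(groups, base=100.0):
        # Property = base value + sum over (group contribution x count).
        return base + sum(GROUP_CONTRIB[g] * n for g, n in groups.items())

    # Hypothetical repeat unit: -CH2-CH(CH3)- (polypropylene-like).
    repeat_unit = {"CH2": 1, "CH(CH3)": 1}
    value = estimate_property(repeat_unit)
    ```

    In the multi-scale workflow described above, a CAMD search would enumerate such group multisets against target property bounds, and atomistic simulation would then vet the shortlisted candidates.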

  2. A compact 341 model at TeV scale

    CERN Document Server

    Dias, A G; Pires, C A de S; da Silva, P S Rodrigues

    2013-01-01

    We build a gauge model based on the SU(3)_c x SU(4)_L x U(1)_X symmetry where the scalar spectrum needed to generate gauge boson and fermion masses has a smaller scalar content than usually assumed in the literature. We compute the running of its abelian gauge coupling and show that a Landau pole shows up at the TeV scale, a fact that we use to consistently implement those fermion masses that are not generated by Yukawa interactions, including neutrino masses. This is appropriately achieved by non-renormalizable effective operators, suppressed by the Landau pole. Also, SU(3)_c x SU(3)_L x U(1)_N models embedded in this gauge structure are bound to be strongly coupled at this same energy scale, contrary to what is generally believed, and neutrino mass generation is instead explained through the same effective operators used in the larger gauge group. Besides, their nice features, such as the existence of cold dark matter candidates and the ability to reproduce the observed standard model Higgs-like phenomenology, are aut...

  3. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center f...

  4. Mapping mantle flow during retreating subduction: Laboratory models analyzed by feature tracking

    Science.gov (United States)

    Funiciello, F.; Moroni, M.; Piromallo, C.; Faccenna, C.; Cenedese, A.; Bui, H. A.

    2006-03-01

    Three-dimensional dynamically consistent laboratory models are carried out to model the large-scale mantle circulation induced by subduction of a laterally migrating slab. A laboratory analogue of a slab-upper mantle system is set up with two linearly viscous layers of silicone putty and glucose syrup in a tank. The circulation pattern is continuously monitored and quantitatively estimated using a feature tracking image analysis technique. The effects of plate width and mantle viscosity/density on mantle circulation are systematically considered. The experiments show that rollback subduction generates a complex three-dimensional time-dependent mantle circulation pattern characterized by two distinct components: poloidal and toroidal circulation. The poloidal component arises from the viscous coupling between the slab motion and the mantle, while the toroidal one is produced by lateral slab migration. Spatial and temporal features of mantle circulation are carefully analyzed. These models show that (1) poloidal and toroidal mantle circulation are both active from the beginning of the subduction process, (2) mantle circulation is intermittent, (3) plate width affects the velocity and the dimensions of the subduction-induced mantle circulation area, and (4) mantle flow in subduction zones cannot be correctly described by models assuming a two-dimensional steady-state process. We show that the intermittent toroidal component of mantle circulation, missed in those models, plays a crucial role in modifying the geometry and the efficiency of the poloidal component.

  5. Daily reservoir inflow forecasting using multiscale deep feature learning with hybrid models

    Science.gov (United States)

    Bai, Yun; Chen, Zhiqiang; Xie, Jingjing; Li, Chuan

    2016-01-01

    Inflow forecasting provides data support for the operation and management of reservoirs. A multiscale deep feature learning (MDFL) method with hybrid models is proposed in this paper for daily reservoir inflow forecasting. Ensemble empirical mode decomposition and the Fourier spectrum are first employed to extract multiscale (trend, period and random) features, which are then represented by three deep belief networks (DBNs), respectively. The weights of each DBN are subsequently used to initialize a neural network (D-NN). The outputs of the three-scale D-NNs are finally reconstructed using a sum-up strategy to produce the forecast. A historical daily inflow series (from 1/1/2000 to 31/12/2012) of the Three Gorges reservoir, China, is investigated with the proposed MDFL and its hybrid models. For comparison, four peer models are adopted for the same task. The results show that the present model outperforms all the peer models in terms of mean absolute percentage error (MAPE = 11.2896%), normalized root-mean-square error (NRMSE = 0.2292), the determination coefficient (R² = 0.8905), and peak percent threshold statistics (PPTS(5) = 10.0229%). The method integrates a deep framework with multiscale and hybrid observations, and is therefore well suited to capturing the sophisticated dynamics of reservoir inflow.
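
    Three of the four evaluation criteria quoted above are standard and easy to state in code (PPTS additionally involves a peak-threshold selection that is omitted here; the NRMSE normalization by the observations' standard deviation is one common convention and an assumption on our part):

    ```python
    import numpy as np

    def mape(obs, pred):
        # Mean absolute percentage error, in percent.
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        return 100.0 * np.mean(np.abs((obs - pred) / obs))

    def nrmse(obs, pred):
        # RMSE normalized by the standard deviation of the observations
        # (other conventions divide by the mean or the range instead).
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        return np.sqrt(np.mean((obs - pred) ** 2)) / np.std(obs)

    def r2(obs, pred):
        # Coefficient of determination: 1 - SS_res / SS_tot.
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        ss_res = np.sum((obs - pred) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return 1.0 - ss_res / ss_tot
    ```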

  6. Axion Models with High Scale Inflation

    CERN Document Server

    Moroi, Takeo; Nakayama, Kazunori; Takimoto, Masahiro

    2014-01-01

    We revisit the cosmological aspects of axion models. In the high-scale inflation scenario, the Peccei-Quinn (PQ) symmetry is likely to be restored during/after inflation. If the curvature of the PQ scalar potential at the origin is smaller than its vacuum expectation value; for instance in a class of SUSY axion models, thermal inflation happens before the radial component of the PQ scalar (saxion) relaxes into the global minimum of the potential and the decay of saxion coherent oscillation would produce too much axion dark radiation. In this paper, we study how to avoid the overproduction of axion dark radiation with some concrete examples. We show that, by taking account of the finite-temperature dissipation effect appropriately, the overproduction constraint can be relaxed since the PQ scalar can take part in the thermal plasma again even after the PQ phase transition. We also show that it can be further relaxed owing to the late time decay of another heavy CP-odd scalar, if it is present.

  7. Fin Buffeting Features of an Early F-22 Model

    Science.gov (United States)

    Moses, Robert W.; Huttsell, Lawrence

    2000-01-01

    Fin buffeting is an aeroelastic phenomenon encountered by high performance aircraft, especially those with twin vertical tails that must operate at high angles of attack. This buffeting is a concern from fatigue and inspection points of view. To date, the buffet (unsteady pressures) and buffeting (structural response) characteristics of the F-15 and F/A-18 fins have been studied extensively using flow visualization, flow velocity measurements, pressure transducers, and response gages. By means of wind-tunnel and flight tests of the F-15 and F/A-18, this phenomenon is well studied, to the point that buffet loads can be estimated and fatigue life can be increased by structural enhancements to these airframes. However, prior to the present research, data were not available outside the F-22 program regarding fin buffeting on the F-22 configuration. During a test in the Langley Transonic Dynamics Tunnel, flow visualization and unsteady fin surface pressures were recorded for a 13.3%-scale F-22 model at high angles of attack for the purpose of comparison with results available for similar aircraft configurations. Details of this test and of fin buffeting are presented herein.

  8. Phenomenology of dark energy: general features of large-scale perturbations

    Science.gov (United States)

    Pèrenon, Louis; Piazza, Federico; Marinoni, Christian; Hui, Lam

    2015-11-01

    We present a systematic exploration of dark energy and modified gravity models containing a single scalar field non-minimally coupled to the metric. Even though the parameter space is large, by exploiting an effective field theory (EFT) formulation and by imposing simple physical constraints such as stability conditions and (sub-)luminal propagation of perturbations, we arrive at a number of generic predictions. (1) The linear growth rate of matter density fluctuations is generally suppressed compared to ΛCDM at intermediate redshifts (0.5 ≲ z ≲ 1), despite the introduction of an attractive long-range scalar force. This is due to the fact that, in self-accelerating models, the background gravitational coupling weakens at intermediate redshifts, over-compensating the effect of the attractive scalar force. (2) At higher redshifts, the opposite happens; we identify a period of super-growth when the linear growth rate is larger than that predicted by ΛCDM. (3) The gravitational slip parameter η, the ratio of the space part of the metric perturbation to the time part, is bounded from above. For Brans-Dicke-type theories, η is at most unity. For more general theories, η can exceed unity at intermediate redshifts, but not by more than about 1.5 if, at the same time, the linear growth rate is to be compatible with current observational constraints. We caution against phenomenological parametrizations of data that do not correspond to predictions from viable physical theories. We advocate the EFT approach as a way to constrain new physics from future large-scale-structure data.
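
    For orientation, the linear growth rate compared against ΛCDM in points (1)-(2) is the standard quantity (a textbook definition, not a formula taken from this paper):

    ```latex
    % Linear theory: \delta(\mathbf{x}, a) \propto D(a), the growth factor.
    f(a) \;\equiv\; \frac{d \ln D(a)}{d \ln a},
    \qquad
    f(a) \simeq \Omega_m(a)^{\gamma}, \quad \gamma \approx 0.55 \;\text{in } \Lambda\mathrm{CDM}.
    ```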

  9. A Bayesian Mixture Model for PoS Induction Using Multiple Features

    OpenAIRE

    Christodoulopoulos, Christos; Goldwater, Sharon; Steedman, Mark

    2011-01-01

    In this paper we present a fully unsupervised syntactic class induction system formulated as a Bayesian multinomial mixture model, where each word type is constrained to belong to a single class. By using a mixture model rather than a sequence model (e.g., HMM), we are able to easily add multiple kinds of features, including those at both the type level (morphology features) and token level (context and alignment features, the latter from parallel corpora). Using only context features, our sy...

  10. Wavelet-based Characterization of Small-scale Solar Emission Features at Low Radio Frequencies

    Science.gov (United States)

    Suresh, A.; Sharma, R.; Oberoi, D.; Das, S. B.; Pankratius, V.; Timar, B.; Lonsdale, C. J.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Corey, B. E.; Deshpande, A. A.; Emrich, D.; Goeke, R.; Greenhill, L. J.; Hazelton, B. J.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kratzenberg, E.; Lynch, M. J.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Ord, S. M.; Prabu, T.; Rogers, A. E. E.; Roshi, A.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tingay, S. J.; Waterson, M.; Wayth, R. B.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.

    2017-07-01

    Low radio frequency solar observations using the Murchison Widefield Array have recently revealed the presence of numerous weak short-lived narrowband emission features, even during moderately quiet solar conditions. These nonthermal features occur at rates of many thousands per hour in the 30.72 MHz observing bandwidth, and hence necessarily require an automated approach for their detection and characterization. Here, we employ continuous wavelet transform using a mother Ricker wavelet for feature detection from the dynamic spectrum. We establish the efficacy of this approach and present the first statistically robust characterization of the properties of these features. In particular, we examine distributions of their peak flux densities, spectral spans, temporal spans, and peak frequencies. We can reliably detect features weaker than 1 SFU, making them, to the best of our knowledge, the weakest bursts reported in literature. The distribution of their peak flux densities follows a power law with an index of -2.23 in the 12-155 SFU range, implying that they can provide an energetically significant contribution to coronal and chromospheric heating. These features typically last for 1-2 s and possess bandwidths of about 4-5 MHz. Their occurrence rate remains fairly flat in the 140-210 MHz frequency range. At the time resolution of the data, they appear as stationary bursts, exhibiting no perceptible frequency drift. These features also appear to ride on a broadband background continuum, hinting at the likelihood of them being weak type-I bursts.
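
    A continuous wavelet transform with a Ricker mother wavelet, the detection machinery named above, can be sketched in numpy; the width grid and the peak-picking step applied to each dynamic-spectrum slice are omitted/assumed here:

    ```python
    import numpy as np

    def ricker(points, a):
        # Ricker ("Mexican hat") wavelet: normalized negative second
        # derivative of a Gaussian, width parameter a.
        t = np.arange(points) - (points - 1) / 2.0
        amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
        return amp * (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

    def cwt_ricker(signal, widths):
        # Continuous wavelet transform: convolve the signal with Ricker
        # wavelets of increasing width; rows index scale, columns time.
        out = np.empty((len(widths), len(signal)))
        for i, w in enumerate(widths):
            n = min(10 * int(w) + 1, len(signal))
            out[i] = np.convolve(signal, ricker(n, w), mode="same")
        return out
    ```

    Short-lived narrowband features then show up as local maxima of the transform whose scale matches their temporal (or spectral) span, which is what makes the approach suitable for automated detection at rates of thousands of events per hour.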

  11. Development of a Scale Model for High Flux Isotope Reactor Cycle 400

    Energy Technology Data Exchange (ETDEWEB)

    Ilas, Dan [ORNL

    2012-03-01

    The development of a comprehensive SCALE computational model for the High Flux Isotope Reactor (HFIR) is documented and discussed in this report. The SCALE model has equivalent features and functionality as the reference MCNP model for Cycle 400 that has been used extensively for HFIR safety analyses and for HFIR experiment design and analyses. Numerical comparisons of the SCALE and MCNP models for the multiplication constant, power density distribution in the fuel, and neutron fluxes at several locations in HFIR indicate excellent agreement between the results predicted with the two models. The SCALE HFIR model is presented in sufficient detail to provide the users of the model with a tool that can be easily customized for various safety analysis or experiment design requirements.

  12. Linking local riverbed flow patterns and pore-water chemistry to hydrogeologic and geomorphic features across scales

    Science.gov (United States)

    Ibrahim, T. G.; Thornton, S.; Surridge, B.; Wainwright, J.

    2009-12-01

    The groundwater-surface water interface (GSI) is a critical environmental hotspot, a key area influencing the fate of carbon, nutrients and contaminants of surface and subsurface origin, and a zone of ecological importance. Policy seeking to mitigate issues relating to dissolved contaminants and to improve stream health increasingly recognizes its significance, particularly in the context of integrated management of streams and aquifers. Techniques assessing riverbed flow and solute patterns are often limited to the local scale. When related to the multi-scale pattern of hydrogeologic and geomorphic features controlling stream, hyporheic and groundwater fluxes, however, they can improve larger-scale predictions of flow and solute behaviour at the GSI. This study develops a conceptual model of riverbed flow and solute patterns, and tests it in a 4th order stream in the UK. It assesses the interaction between large-scale subsurface flowpaths, driven by the distribution of bedrock outcrops and the expansion and closure of alluvial deposits, and small-scale hyporheic flowpaths, driven by riffle-pool sequences. It uses two networks of riverbed mini-piezometers and multi-level samplers: network 1, across fifteen sites in a 7.2 km length of river in unconstrained (open alluvial valley), asymmetric (bedrock outcropping on one bank) and constrained (bedrock on both banks) contexts; and network 2, across six riffle-pool sequences in a 350-m reach, at the transition between asymmetric/unconstrained and constrained contexts. Subsurface flowpaths and stream-water infiltration were deduced by relating vertical exchange fluxes to stream and pore-water patterns of conservative natural tracers. Biogeochemical processes were highlighted using reactive natural tracers. At network 2, measurements of surface water profiles and riverbed coring were also undertaken, and dissolved metal concentrations in the first 15 cm of sediments were assessed using gel probes. Network 1 was sampled twice. Monthly

  13. Global-scale modeling of groundwater recharge

    Science.gov (United States)

    Döll, P.; Fiedler, K.

    2008-05-01

    Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which leads to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km3/yr for the climate normal 1961-1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution. Average per-capita renewable groundwater resources of countries vary from 8 m3/(capita yr) for Egypt to more than 1 million m3
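The headline numbers can be cross-checked with a line of arithmetic: if 12 666 km3/yr of recharge is 32% of total renewable water resources, the implied global total is roughly 39 600 km3/yr (our own back-of-the-envelope sketch, not a figure stated in the abstract):

```python
# Global diffuse groundwater recharge estimated by WGHM for 1961-1990 (from the abstract).
recharge_km3_per_yr = 12_666

# The abstract states this is 32% of total renewable water resources;
# inverting that fraction gives the implied global total.
fraction_of_total = 0.32
implied_total = recharge_km3_per_yr / fraction_of_total

print(round(implied_total))  # 39581 km3/yr, i.e. roughly 39 600 km3/yr
```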

  14. Lithospheric scale model of Merida Andes, Venezuela (GIAME Project)

    Science.gov (United States)

    Schmitz, M.; Orihuela, N. D.; Klarica, S.; Gil, E.; Levander, A.; Audemard, F. A.; Mazuera, F.; Avila, J.

    2013-05-01

    Merida Andes (MA) is one of the most important orogenic belts in Venezuela and represents the northern culmination of the South America Andes. During the last 60 years, several models have been proposed to explain its shallow and deep structure, using different geological, geophysical, seismological, geochemical and petrologic concepts; nevertheless, most of them have applied local observation windows and do not represent the major structure of MA. Therefore, a multidisciplinary research group, coordinated by FUNVISIS in close cooperation with UCV, ULA and PDVSA, was established to pursue the goals outlined in the project GIAME ("Geociencia Integral de los Andes de MErida"), which aims to generate a lithospheric-scale model and a temporal dynamic model for the MA. As a basis for lithospheric investigations of the Merida Andes, we propose three wide-angle seismic profiles across the orogen at three representative sites, in order to determine its inner structure and its relation with the orogen's gravimetric root. To date, there are no seismic studies at lithospheric scale which cross the MA. The wide-angle seismic data will be complemented with the re-processing and re-interpretation of existing reflection seismic data, which will allow a relationship to be established between the MA and its associated flexural basins (the Maracaibo and Barinas-Apure basins). Depending on the results of the VENCORP Project (VENezuelan COntinental Reflection Profiling), which might show reliable results about crustal features and Moho reflectors along three long seismic profiles at the Caribbean Mountain system, a reflection seismic profile across the central portion of the MA is proposed. Additional tasks consist of MA quaternary deformation studies, using research methods such as neotectonics and paleoseismology, georadar, numerical modeling, kinematic GPS, SAR interferometry, thermochronology, detailed studies on regional geology, and flexural modeling

  15. Scale effects and scaling-up by geometric-optical model

    Institute of Scientific and Technical Information of China (English)

    李小文; 王锦地; A.H.Strahler

    2000-01-01

    This is a follow-up paper to our "Scale effect of Planck’s law over nonisothermal blackbody surface". More examples are used to describe the scale effect in detail, and the scaling-up of Planck's law over a blackbody surface is further extended to a three-dimensional nonisothermal surface. This scaling-up results in a conceptual model for the directionality and spectral signature of thermal radiation at the scale of remote sensing pixels. The new model is also an improvement on the Li-Strahler-Friedl conceptual model in the sense that it needs only statistical parameters at the pixel scale, without requiring sub-pixel-scale parameters as the LSF model does.
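The scale effect at issue can be illustrated numerically: because Planck's law is nonlinear in temperature, the pixel-average radiance of a nonisothermal surface is not the radiance evaluated at the average temperature. A minimal sketch (the two temperatures and the 10 µm band are our own illustrative choices, not values from the paper):

```python
import math

def planck_radiance(T, wavelength):
    """Blackbody spectral radiance B(T, lambda) from Planck's law (W sr^-1 m^-3)."""
    h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
    return (2.0 * h * c**2 / wavelength**5) / math.expm1(h * c / (wavelength * k * T))

lam = 10e-6                 # 10 micrometres, thermal infrared
T_hot, T_cold = 320.0, 280.0

# Radiance of a pixel covering equal areas at two temperatures...
pixel_average = 0.5 * (planck_radiance(T_hot, lam) + planck_radiance(T_cold, lam))
# ...versus a uniform pixel at the mean temperature:
at_mean_T = planck_radiance(300.0, lam)

print(pixel_average > at_mean_T)  # True: B(T) is convex here, so averaging does not commute
```

Retrieving a "pixel temperature" by inverting Planck's law on the averaged radiance therefore yields a value that differs from the true area-average temperature, which is exactly the scale effect the paper formalizes.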

  17. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    Science.gov (United States)

    Roffler, Gretchen H.; Schwartz, Michael K.; Pilgrim, Kristy L.; Talbot, Sandra; Sage, Kevin; Adams, Layne G.; Luikart, Gordon

    2016-01-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is important to consider explicit assumptions and spatial scales of measurement. We calculated pairwise genetic distance among 301 Dall's sheep (Ovis dalli dalli) in southcentral Alaska using an intensive noninvasive sampling effort and 15 microsatellite loci. We used multiple regression of distance matrices to assess the correlation of pairwise genetic distance and landscape resistance derived from an RSF, and combinations of landscape features hypothesized to influence dispersal. Dall's sheep gene flow was positively correlated with steep slopes, moderate peak normalized difference vegetation indices (NDVI), and open land cover. Whereas RSF covariates were significant in predicting genetic distance, the RSF model itself was not significantly correlated with Dall's sheep gene flow, suggesting that certain habitat features important during summer (rugged terrain, mid-range elevation) were not influential to effective dispersal. This work underscores that consideration of both habitat selection and landscape genetics models may be useful in developing management strategies to both meet the immediate survival of a species and allow for long-term genetic connectivity.

  18. Modeling ionospheric disturbance features in quasi-vertically incident ionograms using 3-D magnetoionic ray tracing and atmospheric gravity waves

    Science.gov (United States)

    Cervera, M. A.; Harris, T. J.

    2014-01-01

    The Defence Science and Technology Organisation (DSTO) has initiated an experimental program, the Spatial Ionospheric Correlation Experiment, utilizing state-of-the-art DSTO-designed high frequency digital receivers. This program seeks to understand ionospheric disturbances at a range of spatial scales. We employ a 3-D magnetoionic Hamiltonian ray tracing engine, developed by DSTO, to (1) model the various disturbance features observed on both the O and X polarization modes in our quasi-vertical incidence (QVI) data and (2) understand how they are produced. The ionospheric disturbances which produce the observed features were modeled by perturbing the ionosphere with atmospheric gravity waves.
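The final modeling step can be sketched as a traveling-wave perturbation applied to a background electron-density profile before ray tracing. This toy version uses a Chapman layer and a 5% relative wave amplitude; all parameter values are our own illustrative assumptions, not DSTO's model:

```python
import math

def chapman_layer(z_km, n_max=1e12, h_max=250.0, scale_h=50.0):
    """Chapman-layer background electron density (m^-3) at height z_km."""
    u = (z_km - h_max) / scale_h
    return n_max * math.exp(0.5 * (1.0 - u - math.exp(-u)))

def perturbed_density(x_km, z_km, t_s, amp=0.05,
                      lambda_x=150.0, lambda_z=30.0, period_s=1800.0):
    """Background density modulated by a gravity-wave term of relative amplitude amp."""
    kx = 2.0 * math.pi / lambda_x
    kz = 2.0 * math.pi / lambda_z
    omega = 2.0 * math.pi / period_s
    wave = amp * math.cos(kx * x_km + kz * z_km - omega * t_s)
    return chapman_layer(z_km) * (1.0 + wave)

# The perturbation stays within +/-5% of the unperturbed profile:
ratio = perturbed_density(0.0, 250.0, 0.0) / chapman_layer(250.0)
print(0.95 <= ratio <= 1.05)  # True
```

A ray tracer would then sample `perturbed_density` along each ray path; the moving phase fronts of the wave are what produce time-varying features on the O and X traces.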

  19. Beyond the Standard Model new physics at the electroweak scale

    CERN Document Server

    Masiero, Antonio

    1997-01-01

    A critical reappraisal of the Standard Model (SM) will lead us to new physics beyond it. I will argue that we have good reasons to believe that the latter is likely to lie close to the electroweak scale. After discussing the possibility that such new physics may be linked to a dynamical breaking of SU(2)xU(1) (technicolour), I will come to the core of the course: low energy supersymmetry. I will focus on the main phenomenological features, while emphasizing the relevant differences for various options of supersymmetrization of the SM. In particular, the economical (but very particular) minimal SUSY SM (MSSM) will be discussed in detail. Some touchy issues for SUSY, like the flavour problem or matter stability, will be addressed. I will conclude with the prospects for SUSY searches in high-energy accelerators, B-factories and non-accelerator physics.

  20. Testing the Psychometric Features of the Academic Intellectual Leadership Scale in a University Environment

    Science.gov (United States)

    Uslu, Baris

    2015-01-01

    The purpose of this research is to develop a scale for measuring the level of academics' intellectual leadership, test the scale by examining the influence of their personal and institutional characteristics, and then investigate the relationship of academic intellectual leadership (AIL) to communication, climate, and managerial flexibility…

  1. Intervention Validity of Social Behavior Rating Scales: Features of Assessments that Link Results to Treatment Plans

    Science.gov (United States)

    Elliott, Stephen N.; Gresham, Frank M.; Frank, Jennifer L.; Beddow, Peter A., III

    2008-01-01

    The term "intervention validity" refers to the extent to which assessment results can be used to guide the selection of interventions and evaluation of outcomes. In this article, the authors review the defining attributes of rating scales that distinguish them from other assessment tools, assumptions regarding the use of rating scales to measure…

  2. Multi-Level and Multi-Scale Feature Aggregation Using Pretrained Convolutional Neural Networks for Music Auto-Tagging

    Science.gov (United States)

    Lee, Jongpil; Nam, Juhan

    2017-08-01

    Music auto-tagging is often handled in a similar manner to image classification by regarding the 2D audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstraction. Considering this issue, we propose a convolutional neural network (CNN)-based architecture that embraces multi-level and multi-scale features. The architecture is trained in three steps. First, we conduct supervised feature learning to capture local audio features using a set of CNNs with different input sizes. Second, we extract audio features from each layer of the pre-trained convolutional networks separately and aggregate them altogether given a long audio clip. Finally, we put them into fully-connected networks and make final predictions of the tags. Our experiments show that using the combination of multi-level and multi-scale features is highly effective in music auto-tagging and that the proposed method outperforms the previous state of the art on the MagnaTagATune dataset and the Million Song Dataset. We further show that the proposed architecture is useful in transfer learning.
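The aggregation step can be sketched with NumPy: pool the activations of each pre-trained layer over the whole clip, then concatenate the pooled vectors. The random arrays below merely stand in for real per-layer CNN activations, and the mean+max pooling pair is one plausible choice rather than the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for activations of three layers taken from CNNs with different
# input sizes: each array is (time_frames, feature_dim) over one long clip.
layer_activations = [rng.standard_normal((200, d)) for d in (64, 128, 256)]

def aggregate_clip_features(activations):
    """Mean- and max-pool each layer over time, then concatenate across layers."""
    pooled = []
    for act in activations:
        pooled.append(act.mean(axis=0))  # average statistics over the clip
        pooled.append(act.max(axis=0))   # salient peaks over the clip
    return np.concatenate(pooled)

clip_feature = aggregate_clip_features(layer_activations)
print(clip_feature.shape)  # (896,) = 2 pooling ops * (64 + 128 + 256) dims
```

The resulting fixed-length vector is what would be fed to the fully-connected tagging network.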

  3. Islands Climatology at Local Scale. Downscaling with CIELO model

    Science.gov (United States)

    Azevedo, Eduardo; Reis, Francisco; Tomé, Ricardo; Rodrigues, Conceição

    2016-04-01

    Islands with horizontal scales of the order of tens of km, as is the case of the Atlantic Islands of Macaronesia, are subscale orographic features for Global Climate Models (GCMs), since the horizontal scales of these models are too coarse to give a detailed representation of the islands' topography. Even the Regional Climate Models (RCMs) reveal limitations when they are forced to reproduce the climate of small islands, mainly in the way they flatten and lower the islands' elevation, reducing the capacity of the model to reproduce important local mechanisms that lead to very deep local climate differentiation. Important local thermodynamic mechanisms, like the Foehn effect or the influence of topography on the radiation balance, have a prominent role in the climatic spatial differentiation. Advective transport of air - and the consequent induced adiabatic cooling due to orography - leads to transformations of the state parameters of the air that shape the spatial configuration of the fields of pressure, temperature and humidity. The same mechanism is at the origin of the orographic cloud cover that, besides its direct role as a water source through the reinforcement of precipitation, acts as a filter to direct solar radiation and as a source of long-wave radiation affecting the local energy balance. Also, the saturation (or near-saturation) conditions that such clouds provide constitute a barrier to water vapour diffusion in the mechanisms of evapotranspiration. Topographic factors like slope, aspect and orographic mask also have significant importance in the local energy balance. Therefore, the simulation of the local-scale climate (past, present and future) in these archipelagos requires the use of downscaling techniques to adjust locally the outputs obtained at upper scales. This presentation will discuss and analyse the evolution of the CIELO model (acronym for Clima Insular à Escala LOcal), a statistical/dynamical technique developed at the University of the Azores

  4. Two-scale Modelling of material degradation and failure

    Science.gov (United States)

    Aliabadi, Ferri M. H.

    2016-08-01

    It is widely recognized that macroscopic material properties depend on the features of the microstructure. The understanding of the links between microscopic and macroscopic material properties, the main topic of Micromechanics, is of relevant technological interest, as it may enable a deep understanding of the mechanisms governing material degradation and failure. Polycrystalline materials are used in many engineering applications. Their microstructure is determined by the distribution, size, morphology, anisotropy and orientation of the crystals [1]. At temperatures below 0.3-0.5 T_melting there are no ductile or creep mechanisms, and there are two main failure patterns: intergranular, where the damage follows the grain boundaries, and transgranular, where instead the damage goes through the grain, splitting it into two parts. In this talk a two-scale approach to degradation and failure in polycrystalline materials will be presented. The formulation involves the engineering component level (macro-scale) and the material grain level (micro-scale). The macro-continuum is modelled using two- and three-dimensional boundary element formulations, in which the presence of damage is represented through an initial stress approach to account for the local softening in the neighborhood of points experiencing degradation at the micro-scale. The microscopic degradation is explicitly modelled by associating Representative Volume Elements (RVEs) with relevant points of the macro-continuum, representing the polycrystalline microstructure in the neighbourhood of the selected points. A grain-boundary formulation is used to simulate intergranular/transgranular degradation and failure in the microstructure, whose morphology is generated using Voronoi tessellations. Intergranular/transgranular degradation and failure are modeled through cohesive and frictional contact laws. To couple the two scales, macro-strains are transferred to the RVEs as periodic boundary conditions, while overall macro

  5. Orbital and millennial-scale features of atmospheric CH4 over the past 800,000 years

    DEFF Research Database (Denmark)

    Loulergue, Laetitia; Schilt, Adrian; Spahni, Renato

    2008-01-01

    Atmospheric methane is an important greenhouse gas and a sensitive indicator of climate change and millennial-scale temperature variability. Its concentrations over the past 650,000 years have varied between approximately 350 and approximately 800 parts per 10^9 by volume (p.p.b.v.) during glacial...... a detailed atmospheric methane record from the EPICA Dome C ice core that extends the history of this greenhouse gas to 800,000 yr before present. The average time resolution of the new data is approximately 380 yr and permits the identification of orbital and millennial-scale features. Spectral analyses...... of the northern ice sheet allowed higher methane emissions from extending periglacial wetlands. Millennial-scale changes in methane levels identified in our record as being associated with Antarctic isotope maxima events are indicative of ubiquitous millennial-scale temperature variability during the past eight...

  6. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness; they are carried out at a single scale and depend on human experience. A multiple-scale validation based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.

  7. Modeling of micro-scale thermoacoustics

    Science.gov (United States)

    Offner, Avshalom; Ramon, Guy Z.

    2016-05-01

    Thermoacoustic phenomena, that is, onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed as efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, it is expected that non-negligible slip effects exist at the solid surface of the "stack"-a porous matrix, which is used for maintaining the correct temporal phasing of the heat transfer between the solid and oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no slip curve for resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.

  8. Modeling of micro-scale thermoacoustics

    Energy Technology Data Exchange (ETDEWEB)

    Offner, Avshalom [The Nancy and Stephen Grand Technion Energy Program, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Ramon, Guy Z., E-mail: ramong@technion.ac.il [Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel)

    2016-05-02

    Thermoacoustic phenomena, that is, onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed as efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, it is expected that non-negligible slip effects exist at the solid surface of the “stack”-a porous matrix, which is used for maintaining the correct temporal phasing of the heat transfer between the solid and oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no slip curve for resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.

  9. Modeling cancer metabolism on a genome scale

    Science.gov (United States)

    Yizhak, Keren; Chaneton, Barbara; Gottlieb, Eyal; Ruppin, Eytan

    2015-01-01

    Cancer cells have fundamentally altered cellular metabolism that is associated with their tumorigenicity and malignancy. In addition to the widely studied Warburg effect, several new key metabolic alterations in cancer have been established over the last decade, leading to the recognition that altered tumor metabolism is one of the hallmarks of cancer. Deciphering the full scope and functional implications of the dysregulated metabolism in cancer requires both the advancement of a variety of omics measurements and the advancement of computational approaches for the analysis and contextualization of the accumulated data. Encouragingly, while the metabolic network is highly interconnected and complex, it is at the same time probably the best characterized cellular network. In what follows, this review discusses the challenges that genome-scale modeling of cancer metabolism has been facing. We survey several recent studies demonstrating the first strides that have been made, testifying to the value of this approach in portraying a network-level view of cancer metabolism and in identifying novel drug targets and biomarkers. Finally, we outline a few new steps that may further advance this field. PMID:26130389

  10. Site-scale groundwater flow modelling of Ceberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1999-06-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of epsilon_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): The median travel time is 1720 years. The median canister flux is 3.27x10^-5 m/year. The median F-ratio is 1.72x10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within
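The three reported medians are mutually consistent under the relation F = a_r * t / epsilon_f between F-ratio, flow-wetted surface area, advective travel time and flow porosity (our own consistency check on the quoted numbers, not a computation from the report):

```python
eps_f = 1e-4       # flow porosity (dimensionless)
a_r = 0.1          # flow-wetted surface area, m^2 per m^3 of rock
t_median = 1720.0  # median advective travel time, years

f_ratio = a_r * t_median / eps_f
print(f"{f_ratio:.3g} years/m")  # 1.72e+06 years/m, matching the reported median F-ratio
```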

  11. Gluon Saturation Model with Geometric Scaling for Net-Baryon Distributions in Relativistic Heavy Ion Collisions

    Institute of Scientific and Technical Information of China (English)

    李双; 冯笙琴

    2012-01-01

    The net-baryon number is essentially transported by valence quarks that probe the saturation regime in the target by multiple scattering. The net-baryon distributions, nuclear stopping power and gluon saturation features in the SPS and RHIC energy regions are investigated by taking advantage of the gluon saturation model with geometric scaling. Predictions are made for the net-baryon rapidity distributions, mean rapidity loss and gluon saturation features in central Pb + Pb collisions at the LHC.

  12. The Harris-Todaro model and economies of scale.

    Science.gov (United States)

    Panagariya, A; Succar, P

    1986-04-01

    The authors attempt to reanalyze the Harris-Todaro migration model in the presence of economies of scale in the manufacturing sector, focusing on economies of scale that are external to a given firm but internal to the industry.

  13. Efficient Feature-Driven Visualization of Large-Scale Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Aidong

    2012-12-12

    Very large, complex scientific data acquired in many research areas creates critical challenges for scientists to understand, analyze, and organize their data. The objective of this project is to expand the feature extraction and analysis capabilities to develop powerful and accurate visualization tools that can assist domain scientists with their requirements in multiple phases of scientific discovery. We have recently developed several feature-driven visualization methods for extracting different data characteristics of volumetric datasets. Our results verify the hypothesis in the proposal and will be used to develop additional prototype systems.

  14. Representation of Block-Based Image Features in a Multi-Scale Framework for Built-Up Area Detection

    Directory of Open Access Journals (Sweden)

    Zhongwen Hu

    2016-02-01

    Full Text Available The accurate extraction and mapping of built-up areas play an important role in many social, economic, and environmental studies. In this paper, we propose a novel approach for built-up area detection from high spatial resolution remote sensing images, using a block-based multi-scale feature representation framework. First, an image is divided into small blocks, in which the spectral, textural, and structural features are extracted and represented using a multi-scale framework; a set of refined Harris corner points is then used to select blocks as training samples; finally, a built-up index image is obtained by minimizing the normalized spectral, textural, and structural distances to the training samples, and a built-up area map is obtained by thresholding the index image. Experiments confirm that the proposed approach is effective for high-resolution optical and synthetic aperture radar images, with different scenes and different spatial resolutions.
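The index construction can be sketched as follows: extract a feature vector per block, measure each block's normalized distance to the training blocks, and threshold the resulting index image. Random vectors stand in for the spectral, textural and structural features, and the specific normalization and 0.8 threshold are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in per-block feature vectors (spectral, textural, structural), one row per block.
blocks = rng.standard_normal((500, 6))
# Pretend the first 20 blocks were selected as built-up training samples
# (in the paper this selection uses refined Harris corner points).
training = blocks[:20] + 0.1 * rng.standard_normal((20, 6))

def built_up_index(blocks, training):
    """Index in [0, 1]: high where a block is close to the training samples."""
    mu, sigma = blocks.mean(axis=0), blocks.std(axis=0)
    b = (blocks - mu) / sigma            # normalize feature dimensions
    t = (training - mu) / sigma
    dists = np.linalg.norm(b[:, None, :] - t[None, :, :], axis=2)
    d_min = dists.min(axis=1)            # distance to the nearest training block
    return 1.0 - d_min / d_min.max()

index = built_up_index(blocks, training)
built_up_map = index > 0.8               # threshold the index image
print(index.shape)
```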

  15. Multiscale Feature Model for Terrain Data Based on Adaptive Spatial Neighborhood

    Directory of Open Access Journals (Sweden)

    Huijie Zhang

    2013-01-01

    Full Text Available Multiresolution hierarchy based on features (FMRH) has been applied in the field of terrain modeling and has obtained significant results in real engineering. However, it is difficult to schedule multiresolution data in FMRH from external memory. This paper proposes a new multiscale feature model and related strategies to cluster spatial data blocks and solve the scheduling problems of FMRH using adaptive spatial neighborhoods. In the model, nodes with similar error in different layers are placed in one cluster. On this basis, a space index algorithm for each cluster, guided by the Hilbert curve, is proposed. It ensures that multi-resolution terrain data can be loaded without traversing the whole FMRH; therefore, the efficiency of data scheduling is improved. Moreover, a spatial-closeness theorem for clusters is put forward and proved. It guarantees that the union of data blocks composes a whole terrain without any data loss. Finally, experiments have been carried out on many different large-scale data sets, and the results demonstrate that the schedule time is shortened and the efficiency of I/O operations is apparently improved, which is important in real engineering.
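The Hilbert-curve guidance can be sketched with the classic (x, y) -> d mapping on a 2^k x 2^k grid: sorting data blocks by their Hilbert key keeps spatially adjacent blocks adjacent in storage order. This is the standard construction; the block coordinates below are illustrative only:

```python
def hilbert_index(n, x, y):
    """Distance along the Hilbert curve of cell (x, y) on an n x n grid (n a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant to keep the curve continuous
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Scheduling order for a few terrain data blocks on a 4 x 4 grid:
blocks = [(3, 0), (0, 0), (0, 3), (1, 1)]
blocks.sort(key=lambda xy: hilbert_index(4, *xy))
print(blocks)  # [(0, 0), (1, 1), (0, 3), (3, 0)]
```

Because consecutive Hilbert keys are always spatially adjacent cells, a contiguous read of the sorted file tends to fetch one spatial neighbourhood at a time, which is the property the paper exploits for I/O scheduling.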

  16. Multi-scale modeling of softening materials

    NARCIS (Netherlands)

    Lloberas Valls, O.; Simone, A.; Sluys, L.J.

    2008-01-01

    This paper presents an assessment of a two-scale framework for the study of softening materials. The procedure is based on a hierarchical Finite Element (FE) scheme in which computations are performed both at macro and mesoscopic scale levels. The methodology is chosen specifically to remain valid

  17. Research on the Price Features of Oil Stochastic Model Based on the Continuous Jump Model

    Directory of Open Access Journals (Sweden)

    Hou Mengmeng

    2017-01-01

    Full Text Available Aiming at calculating price changes under the price features of a stochastic oil-price model, a continuous jump model is proposed in this paper for data processing. The procedure is flexible, may be used with market prices of any oil contingent claim with a closed-form pricing solution, and easily deals with missing-data problems. The results show that the overall accuracy of the proposed system can thus be improved substantially.

  18. Upscaling a catchment-scale ecohydrology model for regional-scale earth system modeling

    Science.gov (United States)

    Adam, J. C.; Tague, C.; Liu, M.; Garcia, E.; Choate, J.; Mullis, T.; Hull, R.; Vaughan, J. K.; Kalyanaraman, A.; Nguyen, T.

    2014-12-01

    With a focus on the U.S. Pacific Northwest (PNW), BioEarth is an Earth System Model (EaSM) currently in development that explores the interactions between coupled C:N:H2O dynamics and resource management actions at the regional scale. Capturing coupled biogeochemical processes within EaSMs like BioEarth is important for exploring the response of the land surface to changes in climate and resource management actions; information that is important for shaping decisions that promote sustainable use of our natural resources. However, many EaSM frameworks do not adequately represent landscape-scale (<10 km) heterogeneity, as coarser resolutions are necessitated by computational limitations. Spatial heterogeneity in a landscape arises due to spatial differences in underlying soil and vegetation properties that control moisture, energy and nutrient fluxes, as well as differences that arise due to spatially-organized connections that may drive an ecohydrologic response by the land surface. While many land surface models used in EaSM frameworks capture the first type of heterogeneity, few account for the influence of lateral connectivity on land surface processes. This type of connectivity can be important when considering soil moisture and nutrient redistribution. The RHESSys model is utilized by BioEarth to enable a "bottom-up" approach that preserves fine spatial-scale sensitivities and lateral connectivity that may be important for coupled C:N:H2O dynamics over larger scales. RHESSys is a distributed eco-hydrologic model that was originally developed to run at relatively fine but computationally intensive spatial resolutions over small catchments. The objective of this presentation is to describe two developments to enable implementation of RHESSys over the PNW: 1) RHESSys is being adapted for BioEarth to allow for moderately coarser resolutions and the flexibility to capture both types of heterogeneity at biome-specific spatial scales; 2) a Kepler workflow is utilized to enable RHESSys implementation over

  19. Operational, regional-scale, chemical weather forecasting models in Europe

    NARCIS (Netherlands)

    Kukkonen, J.; Balk, T.; Schultz, D.M.; Baklanov, A.; Klein, T.; Miranda, A.I.; Monteiro, A.; Hirtl, M.; Tarvainen, V.; Boy, M.; Peuch, V.H.; Poupkou, A.; Kioutsioukis, I.; Finardi, S.; Sofiev, M.; Sokhi, R.; Lehtinen, K.; Karatzas, K.; San José, R.; Astitha, M.; Kallos, G.; Schaap, M.; Reimer, E.; Jakobs, H.; Eben, K.

    2011-01-01

    Numerical models that combine weather forecasting and atmospheric chemistry are here referred to as chemical weather forecasting models. Eighteen operational chemical weather forecasting models on regional and continental scales in Europe are described and compared in this article. Topics discussed

  20. Predicting protein-protein interactions from primary protein sequences using a novel multi-scale local feature representation scheme and the random forest.

    Directory of Open Access Journals (Sweden)

    Zhu-Hong You

Full Text Available The study of protein-protein interactions (PPIs) can be very important for the understanding of biological cellular functions. However, detecting PPIs in the laboratory is both time-consuming and expensive. For this reason, there has been much recent effort to develop techniques for computational prediction of PPIs, as this can complement laboratory procedures and provide an inexpensive way of predicting the most likely set of interactions at the entire proteome scale. Although much progress has already been achieved in this direction, the problem is still far from being solved. More effective approaches are still required to overcome the limitations of the current ones. In this study, a novel Multi-scale Local Descriptor (MLD) feature representation scheme is proposed to extract features from a protein sequence. This scheme can capture multi-scale local information by varying the length of protein-sequence segments. Based on the MLD, an ensemble learning method, the Random Forest (RF) method, is used as the classifier. The MLD feature representation scheme facilitates the mining of interaction information from multi-scale continuous amino acid segments, making it easier to capture multiple overlapping continuous binding patterns within a protein sequence. When the proposed method is tested with the PPI data of Saccharomyces cerevisiae, it achieves a prediction accuracy of 94.72% with 94.34% sensitivity at a precision of 98.91%. Extensive experiments are performed to compare our method with existing sequence-based methods. Experimental results show that our predictor also outperforms several other state-of-the-art predictors on the H. pylori dataset. Such good results can largely be credited to the learning capabilities of the RF model and the novel MLD feature representation scheme. 
The experimental results show that the proposed approach can be very promising for predicting PPIs and can be a useful
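The pipeline this record describes — cut the sequence into segments at several scales, describe each segment, concatenate the descriptors, and classify with a random forest — can be sketched as follows. This is a simplified illustration, not the authors' MLD: plain amino-acid composition stands in for the real descriptor, and the segment scheme (1, 2 and 4 equal segments) is an assumption.

```python
# Sketch of a multi-scale local feature scheme (illustrative, not the paper's MLD).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(segment):
    """Frequency of each of the 20 amino acids within one segment."""
    n = max(len(segment), 1)
    return [segment.count(a) / n for a in AMINO_ACIDS]

def mld_features(sequence, scales=(1, 2, 4)):
    """Concatenate composition vectors of continuous segments at several scales.

    At scale s the sequence is cut into s equal-length segments, so short
    segments capture local binding patterns and long ones global context.
    """
    features = []
    for s in scales:
        step = max(len(sequence) // s, 1)
        for i in range(s):
            seg = sequence[i * step:(i + 1) * step] or sequence[-step:]
            features.extend(composition(seg))
    return features

vec = mld_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
# (1 + 2 + 4) segments x 20 amino acids = 140 features per protein;
# pairs of such vectors would then be fed to a random forest classifier
# (e.g. scikit-learn's RandomForestClassifier).
print(len(vec))
```

The scale tuple controls the trade-off between local detail and segment length; the paper's descriptor varies segment lengths for the same reason.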

  1. Feature and Model Selection in Feedforward Neural Networks

    Science.gov (United States)

    1994-06-01

smaller than those experienced with the derivative-based saliencies. However, a minimal number of nodes were used to analyze the FLUIR problem. (Table 15, FLUIR Problem: Saliency Metric Loadings after Varimax Rotation, not legible in the extracted text.)

  2. Special welding features in the manufacture of large-scale heat exchangers for the petrochemical industry

    Energy Technology Data Exchange (ETDEWEB)

    Braeutigam, M.; Huppertz, P.H.

    1986-01-01

    It is the object of the present paper to describe the manufacture of very large heat exchangers developed and constructed in order to minimize energy losses. A few special welding features have been dealt with in detail. Constructional details, such as the choice of materials and the anti-vibration precautions, bring the article to a close.

  3. Comparative assessment of continuum-scale models of bimolecular reactive transport in porous media under pre-asymptotic conditions

    Science.gov (United States)

    Porta, G. M.; Ceriotti, G.; Thovert, J.-F.

    2016-02-01

    We compare the ability of various continuum-scale models to reproduce the key features of a transport setting associated with a bimolecular reaction taking place in the fluid phase and numerically simulated at the pore-scale level in a disordered porous medium. We start by considering a continuum-scale formulation which results from formal upscaling of this reactive transport process by means of volume averaging. The resulting (upscaled) continuum-scale system of equations includes nonlocal integro-differential terms and the effective parameters embedded in the model are quantified directly through computed pore-scale fluid velocity and pore space geometry attributes. The results obtained through this predictive model formulation are then compared against those provided by available effective continuum models which require calibration through parameter estimation. Our analysis considers two models recently proposed in the literature which are designed to embed incomplete mixing arising from the presence of fast reactions under advection-dominated transport conditions. We show that best estimates of the parameters of these two models heavily depend on the type of data employed for model calibration. Our upscaled nonlocal formulation enables us to reproduce most of the critical features observed through pore-scale simulation without any model calibration. As such, our results clearly show that embedding into a continuum-scale model the information content associated with pore-scale geometrical features and fluid velocity yields improved interpretation of typically available continuum-scale transport observations.

  4. Modelling Feature Interaction Patterns in Nokia Mobile Phones using Coloured Petri Nets and Design/CPN

    DEFF Research Database (Denmark)

    Lorentsen, Louise; Tuovinen, Antti-Pekka; Xu, Jianli

    2002-01-01

    This paper describes the first results of a project on modelling of important feature interaction patterns of Nokia mobile phones using Coloured Petri Nets. A modern mobile phone supports many features: voice and data calls, text messaging, personal information management (phonebook and calendar....... In this paper, we look at the problem of feature interaction in the user interface of Nokia mobile phones. We present a categorization of feature interactions and describe our approach to the modelling of feature interactions using Coloured Petri Nets (CP-nets or CPN). The CPN model is extended...

  5. Pore-scale modeling of competitive adsorption in porous media.

    Science.gov (United States)

    Ryan, Emily M; Tartakovsky, Alexandre M; Amon, Cristina

    2011-03-01

In this paper we present a smoothed particle hydrodynamics (SPH) pore-scale multicomponent reactive transport model with competitive adsorption. SPH is a Lagrangian, particle-based modeling method which uses the particles as interpolation points to discretize and solve flow and transport equations. The theory and details of the SPH pore-scale model are presented along with a novel method for handling surface reactions, the continuum surface reaction (CSR) model. The numerical accuracy of the CSR model is validated with analytical and finite difference solutions, and the effects of spatial and temporal resolution on the accuracy of the model are also discussed. The pore-scale model is used to study competitive adsorption for different Damköhler and Peclet numbers in a binary system where a plume of species B is introduced into a system which initially contains species A. The pore-scale model results are compared with a Darcy-scale model to investigate the accuracy of a Darcy-scale reactive transport model for a wide range of Damköhler and Peclet numbers. The comparison shows that the Darcy model overestimates the mass fractions of aqueous and adsorbed species B and underestimates the mass fractions of species A. The Darcy-scale model also predicts faster transport of species A and B through the system than the pore-scale model. The overestimation of the advective velocity and the extent of reactions by the Darcy-scale model are due to incomplete pore-scale mixing. As the degree of solute mixing decreases with increasing Peclet and Damköhler numbers, so does the accuracy of the Darcy-scale model. Copyright © 2010 Elsevier B.V. All rights reserved.
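For orientation, the two dimensionless groups the comparison is organized around can be computed from their standard definitions (the paper's exact nondimensionalization may differ; the values below are illustrative, not taken from the study):

```python
def peclet(velocity, length, diffusivity):
    """Pe = v*L/D: ratio of advective to diffusive transport."""
    return velocity * length / diffusivity

def damkohler(rate_const, length, velocity):
    """Da = k*L/v: ratio of reaction rate to advective transport rate."""
    return rate_const * length / velocity

# Example values (hypothetical): high Pe and Da is the regime where
# incomplete pore-scale mixing makes the Darcy-scale model least accurate.
Pe = peclet(velocity=1e-3, length=1e-2, diffusivity=1e-9)
Da = damkohler(rate_const=1.0, length=1e-2, velocity=1e-3)
print(Pe, Da)
```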

  6. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering...... that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...... are the limitations of different types of mod - els? This paper will provide examples of models that have been published in the literature for use across bioreactor scales, including computational fluid dynamics (CFD) and population balance models. Furthermore, the importance of good modeling practice...

  7. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    Science.gov (United States)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.

  8. Feature analysis of the scale factor variation on a constant rate biased ring laser gyro

    Institute of Scientific and Technical Information of China (English)

    Shiqiao Qin; Zongsheng Huang; Xingshu Wang

    2007-01-01

The scale factor of a constant rate biased ring laser gyro (RLG) is studied both theoretically and experimentally. By analyzing experimental data, we find that there are three main terms contributing to the scale factor deviation: one is independent of time, the second varies linearly with time, and the third varies exponentially with time. Theoretical analyses show that the first term is caused by the experimental setup, while the second and third are caused by non-uniform thermal expansion and cavity loss variation of the RLG.
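The three-term decomposition described here can be written as S(t) = a + b·t + c·exp(−t/τ). A sketch of recovering the coefficients from synthetic data, assuming the decay constant τ is known so that ordinary least squares suffices; all parameter values are illustrative, not from the paper:

```python
import numpy as np

def scale_factor_model(t, a, b, c, tau):
    """S(t) = a + b*t + c*exp(-t/tau): time-independent, linear, exponential terms."""
    return a + b * t + c * np.exp(-t / tau)

# Synthetic deviation data (hypothetical parameter values).
t = np.linspace(0.0, 10.0, 200)
s = scale_factor_model(t, a=1.0, b=2e-3, c=5e-2, tau=2.0)

# With tau assumed known, the remaining coefficients are linear in the
# basis [1, t, exp(-t/tau)] and follow from ordinary least squares.
tau = 2.0
A = np.column_stack([np.ones_like(t), t, np.exp(-t / tau)])
a_est, b_est, c_est = np.linalg.lstsq(A, s, rcond=None)[0]
print(round(a_est, 6), round(b_est, 6), round(c_est, 6))
```

If τ were unknown, a nonlinear fit (e.g. `scipy.optimize.curve_fit`) would be needed instead.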

  9. Multi-scale peridynamic modeling of dynamic fracture in concrete

    Science.gov (United States)

    Lammi, Christopher J.; Zhou, Min

    2017-01-01

    Peridynamics simulations of the dynamic deformation and failure of high-performance concrete are performed at the meso-scale. A pressure-dependent, peridynamic plasticity model and failure criteria are used to capture pressure-sensitive granular flow and fracture. The meso-scale framework explicitly resolves reinforcing phases, pores, and intrinsic flaws. A novel scaling approach is formulated to inform the engineering-scale plasticity model parameters with meso-scale simulation results. The effects of composition, porosity, and fracture energy at the meso-scale on the engineering-scale impact resistance are assessed. The fracture process zone at the meso-scale is found to propagate along adjacent pores and reinforcing phases under tensile and shear loading conditions. The simulations show that tensile strength decreases and dissipation increases as the porosity in the concrete increases. The framework and modeling approach allow the delineation of trends that can be used to design more impact-resistant materials.

  10. A multi-scale approach to mass segmentation using active contour models

    Science.gov (United States)

    Yu, Hongwei; Li, Lihua; Xu, Weidong; Liu, Wei

    2010-03-01

As an important step in mass classification, mass segmentation plays a key role in computer-aided diagnosis (CAD). In this paper, we propose a novel scheme for breast mass segmentation in mammograms based on the level set method and multi-scale analysis. The mammogram is first decomposed by a Gaussian pyramid into a sequence of images from fine to coarse; the C-V model is then applied at the coarse scale, and the obtained rough contour is used as the initial contour for segmentation at the fine scale. A local active contour (LAC) model based on local image information is utilized to refine the rough contour locally at the fine scale. In addition, area and gray-level features extracted from the coarse segmentation are used to set the parameters of the LAC model automatically, improving the adaptivity of our method. The results show the higher accuracy and robustness of the proposed multi-scale segmentation method compared with conventional ones.
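The coarse-to-fine scheme can be sketched as follows. This is a stand-in under stated assumptions, not the paper's method: 2×2 block averaging replaces the Gaussian pyramid's blur-and-decimate step, a mean threshold replaces the C-V level-set model, and the upsampled coarse mask plays the role of the initial contour that the LAC model would then refine.

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 block averaging (stand-in for Gaussian blur + decimation)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine_mask(img, levels=2):
    """Segment at the coarsest level, then propagate the result down the pyramid."""
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    coarse = pyramid[-1]
    mask = coarse > coarse.mean()      # mean threshold stands in for the C-V model
    for level in reversed(pyramid[:-1]):
        # Upsampled coarse mask = initial contour for the finer scale;
        # a local refinement step (the LAC model in the paper) would go here.
        mask = np.kron(mask.astype(np.uint8), np.ones((2, 2), np.uint8)).astype(bool)
        mask = mask[:level.shape[0], :level.shape[1]]
    return mask

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                  # bright "mass" on a dark background
mask = coarse_to_fine_mask(img)
print(mask.shape, int(mask.sum()))
```

The point of the structure is that the expensive, detail-sensitive model only ever starts from a good initialization produced cheaply at the coarse scale.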

11. STEP-Based Feature Model and Feature Reconstruction Algorithm (基于STEP的特征模型及重构算法)

    Institute of Scientific and Technical Information of China (English)

    刘乃若; 王金伦

    2003-01-01

For CAX systems, feature-based product data integration is one of the hot research topics. The STEP AP214 protocol provides a standard to resolve this problem. This paper discusses the relations among entities in STEP AP214. In particular, for the problem that the protocol does not explicitly define features, it puts forward methods for the representation and manipulation of a feature-oriented data model. It gives the feature-model mapping between AP214 and feature-based CAD systems, which provides a theoretical basis for designing a uniform feature model for CAD/CAPP/CAM.

  12. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim to make automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images more reliable and robust. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features hidden in the profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the applicability of the resulting facial models to practical applications such as face recognition and facial animation.
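The frontal/profile fusion step can be sketched under an idealized orthogonal-camera assumption: the frontal view supplies (x, y), the profile view supplies (z, y), and the shared y coordinate is averaged to absorb slight misalignment. Feature names and coordinates below are hypothetical, and the paper's heuristics for non-ideal geometry are omitted.

```python
def fuse_orthogonal(frontal_xy, profile_zy):
    """Combine per-feature 2D detections from two orthogonal views into 3D points."""
    points3d = {}
    for name, (x, y_f) in frontal_xy.items():
        if name not in profile_zy:
            continue                      # feature hidden in the profile view
        z, y_p = profile_zy[name]
        y = 0.5 * (y_f + y_p)             # average out non-perfect alignment
        points3d[name] = (x, y, z)
    return points3d

# Illustrative detections (hypothetical units and landmark names).
frontal = {"nose_tip": (0.0, 1.2), "chin": (0.0, -2.0)}
profile = {"nose_tip": (0.9, 1.3), "chin": (0.1, -2.1)}
print(fuse_orthogonal(frontal, profile))
```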

  13. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images depending on the algorithm used. However, extraction and evaluation of textural information are generally time-consuming, especially for the large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns need to be known in advance, as well as the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Textural information was used during the classification in addition to the spectral information. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
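A minimal NumPy sketch of the second-order Haralick pipeline named here: build a gray-level co-occurrence matrix for a given displacement, then derive a few of the listed statistics (contrast, homogeneity, entropy). The 4-level toy patch and offsets are illustrative; in practice a library such as scikit-image (`graycomatrix`/`graycoprops`) and sliding windows of several sizes and directions would be used, as in the study.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalised gray-level co-occurrence matrix for one displacement (dx, dy)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / max(P.sum(), 1)

def haralick_subset(P):
    """Contrast, homogeneity and entropy from a normalised GLCM."""
    i, j = np.indices(P.shape)
    contrast = (P * (i - j) ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()
    return contrast, homogeneity, entropy

# Toy 4-level "panchromatic" patch; real use slides windows over the image.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
for dx, dy in [(1, 0), (0, 1)]:          # horizontal and vertical offsets
    print([round(v, 3) for v in haralick_subset(glcm(patch, dx, dy, 4))])
```

Computing such statistics per window and direction is what makes the feature set large, and hence why the HDMR-based selection of the most informative ones matters.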

  14. Time invariant scaling in discrete fragmentation models

    CERN Document Server

    Giraud, B G; Giraud, B G; Peschanski, R

    1994-01-01

    Linear rate equations are used to describe the cascading decay of an initial heavy cluster into fragments. We consider moments of arbitrary orders of the mass multiplicity spectrum and derive scaling properties pertaining to their time evolution. We suggest that the mass weighted multiplicity is a suitable observable for the discovery of scaling. Numerical tests validate such properties, even for moderate values of the initial mass (nuclei, percolation clusters, jets of particles etc.). Finite size effects can be simply parametrized.
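The cascading-decay setup can be illustrated with a toy linear rate model: a cluster of mass m breaks at an assumed mass-proportional rate into two uniformly chosen integer pieces, and moments of the mass multiplicity spectrum are tracked over time. The breakup kernel and the forward-Euler stepping are illustrative choices, not the paper's equations.

```python
import numpy as np

# Toy linear fragmentation cascade. n[m] is the number of clusters of
# mass m; index 0 is unused. Kernel: mass-m clusters break at rate m.
M = 32                                    # initial cluster mass
n = np.zeros(M + 1)
n[M] = 1.0
dt, steps = 0.002, 1000

def moment(n, q):
    """q-th moment of the mass multiplicity spectrum, sum_m m**q * n[m]."""
    m = np.arange(len(n), dtype=float)
    return float((n * m ** q).sum())

for _ in range(steps):                    # forward-Euler time stepping
    dn = np.zeros_like(n)
    for m in range(2, M + 1):
        rate = m * n[m]                   # total breakup rate of mass-m clusters
        dn[m] -= rate
        for m1 in range(1, m):            # each split (m1, m - m1) equally likely
            dn[m1] += 2.0 * rate / (m - 1)
    n += dt * dn

# The first moment (total mass) is conserved while the zeroth moment
# (multiplicity) grows as the cascade proceeds.
print(round(moment(n, 1), 6), round(moment(n, 0), 3))
```

Tracking how moments of different orders evolve relative to one another is the kind of observable the scaling analysis in the record is built on.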

  15. Large-scale oscillation of structure-related DNA sequence features in human chromosome 21

    Science.gov (United States)

    Li, Wentian; Miramontes, Pedro

    2006-08-01

Human chromosome 21 is the only chromosome in the human genome that exhibits an oscillation of the (G+C) content with a cycle length of hundreds of kilobases (kb) (~500 kb near the right telomere). We aim at establishing the existence of a similar periodicity in structure-related sequence features in order to relate this (G+C)% oscillation to other biological phenomena. The following quantities are shown to oscillate with the same 500 kb periodicity in human chromosome 21: binding energy calculated by two sets of dinucleotide-based thermodynamic parameters, AA/TT and AAA/TTT bi- and tri-nucleotide density, 5'-TA-3' dinucleotide density, and the signal for 10- or 11-base periodicity of AA/TT or AAA/TTT. These intrinsic quantities are related to structural features of the DNA double helix, such as base-pair binding, untwisting or unwinding, stiffness, and a putative tendency for nucleosome formation.
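The quantities in this record reduce to windowed counts along the sequence; a sketch of windowed (G+C) content and AA/TT dinucleotide density follows. The window size and toy sequence are illustrative: the study works with windows small relative to the ~500 kb oscillation period.

```python
def windowed_density(seq, window, patterns):
    """Per-window fraction of positions covered by counts of the given patterns."""
    out = []
    for start in range(0, len(seq) - window + 1, window):
        w = seq[start:start + window]
        hits = sum(w.count(p) for p in patterns)
        out.append(hits / window)
    return out

# Toy sequence alternating GC-rich and AT-rich blocks, so both densities
# oscillate in step, mimicking the chromosome-21 observation in miniature.
seq = ("GCGCGCGCGG" + "ATATAATTAA") * 3
gc = windowed_density(seq, 10, ["G", "C"])
aatt = windowed_density(seq, 10, ["AA", "TT"])
print(gc)
print(aatt)
```

Running a Fourier transform or autocorrelation over such window series is the usual way to make the periodicity quantitative.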

  16. Feature-Enhanced, Model-Based Sparse Aperture Imaging

    Science.gov (United States)

    2008-03-01

obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data... application in a variety of problems, including image reconstruction and restoration [5], wavelet denoising [6], feature selection in machine learning... on the singular value decomposition (SVD) to combine multiple samples and the use of second-order cone programming for optimization of the resulting

  17. Large Scale Near-Duplicate Celebrity Web Images Retrieval Using Visual and Textual Features

    Directory of Open Access Journals (Sweden)

    Fengcai Qiao

    2013-01-01

Full Text Available Near-duplicate image retrieval is a classical research problem in computer vision with many applications such as image annotation and content-based image retrieval. On the web, near-duplication is more prevalent in queries for celebrities and historical figures, which are of particular interest to end users. Existing methods such as bag-of-visual-words (BoVW) solve this problem mainly by exploiting purely visual features. To overcome this limitation, this paper proposes a novel text-based data-driven reranking framework, which utilizes textual features and is combined with state-of-the-art BoVW schemes. Under this framework, the input of the retrieval procedure is still only a query image. To verify the proposed approach, a dataset of 2 million images of 1089 different celebrities together with their accompanying texts is constructed. In addition, we comprehensively analyze the different categories of near-duplication observed in our constructed dataset. Experimental results on this dataset show that the proposed framework can achieve higher mean average precision (mAP), with an improvement of 21% on average in comparison with approaches based only on visual features, while not notably prolonging the retrieval time.
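One way to picture the reranking idea is a generic linear fusion, sketched below under assumed inputs (this is not the paper's exact data-driven scheme): blend each candidate's BoVW visual score with the textual similarity between its accompanying text and a pseudo-query text, which in the paper's setting would have to be derived from top-ranked results, since the user supplies only an image.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words dicts."""
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rerank(visual_scores, texts, query_text, alpha=0.5):
    """Blend visual scores with textual similarity; alpha weights the two cues."""
    def bow(t):
        words = t.lower().split()
        return {w: words.count(w) for w in words}
    q = bow(query_text)
    fused = {img: alpha * v + (1 - alpha) * cosine(bow(texts[img]), q)
             for img, v in visual_scores.items()}
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical candidates: img2 and img3 share the pseudo-query's text.
visual = {"img1": 0.80, "img2": 0.78, "img3": 0.10}
texts = {"img1": "red carpet premiere", "img2": "official portrait 2010",
         "img3": "official portrait 2010"}
print(rerank(visual, texts, "official portrait 2010"))
```

The textual cue promotes img2 over the visually top-ranked img1, which is the effect the reranking framework exploits.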

  18. Large scale near-duplicate celebrity web images retrieval using visual and textual features.

    Science.gov (United States)

    Qiao, Fengcai; Wang, Cheng; Zhang, Xin; Wang, Hui

    2013-01-01

Near-duplicate image retrieval is a classical research problem in computer vision with many applications such as image annotation and content-based image retrieval. On the web, near-duplication is more prevalent in queries for celebrities and historical figures, which are of particular interest to end users. Existing methods such as bag-of-visual-words (BoVW) solve this problem mainly by exploiting purely visual features. To overcome this limitation, this paper proposes a novel text-based data-driven reranking framework, which utilizes textual features and is combined with state-of-the-art BoVW schemes. Under this framework, the input of the retrieval procedure is still only a query image. To verify the proposed approach, a dataset of 2 million images of 1089 different celebrities together with their accompanying texts is constructed. In addition, we comprehensively analyze the different categories of near-duplication observed in our constructed dataset. Experimental results on this dataset show that the proposed framework can achieve higher mean average precision (mAP), with an improvement of 21% on average in comparison with approaches based only on visual features, while not notably prolonging the retrieval time.

  19. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    with sizes dependent on the content of the image, at the location of each triangle. In this paper, we will demonstrate that by rotation of the interest regions at the triangles it is possible in grey scale images to achieve a recognition precision comparable with that of MOPS. The test of the proposed method...

  20. Homogeneity analysis with k sets of variables: An alternating least squares method with optimal scaling features

    NARCIS (Netherlands)

    van der Burg, Eeke; de Leeuw, Jan; Verdegaal, R.

    1986-01-01

    Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper, it is applied to sets of variables by using sums within sets. The resulting technique is referred to as OVERALS. It uses the notion of optimal scaling, with transformations that can

  1. Relation between B-mode Gray-scale Median and Clinical Features of Carotid Stenosis Vulnerability

    NARCIS (Netherlands)

    Kolkert, Joé L.; Meerwaldt, Robbert; Loonstra, Jan; Schenk, Miranda; Palen, van der Job; Dungen, van den Jan J.; Zeebregts, Clark J.

    2014-01-01

Background Vulnerability of the carotid plaque might be useful as a predictor of ischemic stroke risk. The gray-scale median (GSM) of the carotid plaque at B-mode imaging has been described as an objective tool to quantify vulnerability. However, its use is disputed in the published literature.

  2. Relation between B-mode Gray-scale Median and Clinical Features of Carotid Stenosis Vulnerability

    NARCIS (Netherlands)

    Kolkert, Joe L.; Meerwaldt, Robbert; Loonstra, Jan; Schenk, Miranda; van der Palen, Job; van den Dungen, Jan J.; Zeebregts, Clark J.

    2014-01-01

Background: Vulnerability of the carotid plaque might be useful as a predictor of ischemic stroke risk. The gray-scale median (GSM) of the carotid plaque at B-mode imaging has been described as an objective tool to quantify vulnerability. However, its use is disputed in the published literature.

  3. Differential Effect of Features of Autism on IQs Reported Using Wechsler Scales

    Science.gov (United States)

    Carothers, Douglas E.; Taylor, Ronald L.

    2013-01-01

    Many children with autistic disorder, or autism, are described as having low intelligence quotients. These descriptions are partially based on use of various editions of the "Wechsler Intelligence Scale for Children" (WISC), the most widely used intelligence test for children with autism. An important question is whether task demands of…

  4. An Investigation of Feature Models for Music Genre Classification using the Support Vector Classifier

    DEFF Research Database (Denmark)

    Meng, Anders; Shawe-Taylor, John

    2005-01-01

    autoregressive model for modelling short time features. Furthermore, it was investigated how these models can be integrated over a segment of short time features into a kernel such that a support vector machine can be applied. Two kernels with this property were considered, the convolution kernel and product...

  5. Results of PMIP2 coupled simulations of the Mid-Holocene and Last Glacial Maximum – Part 1: experiments and large-scale features

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2007-06-01

Full Text Available A set of coupled ocean-atmosphere simulations using state-of-the-art climate models is now available for the Last Glacial Maximum and the Mid-Holocene through the second phase of the Paleoclimate Modeling Intercomparison Project (PMIP2). This study presents the large-scale features of the simulated climates and compares the new model results to those of the atmospheric models from the first phase of PMIP, for which sea surface temperature was prescribed or computed using simple slab-ocean formulations. We consider the large-scale features of the climate change, pointing out some of the major differences between the different sets of experiments. We show in particular that systematic differences between PMIP1 and PMIP2 simulations are due to the interactive ocean, such as the amplification of the African monsoon at the Mid-Holocene or the change in precipitation in mid-latitudes at the LGM. The PMIP2 simulations are also in general in better agreement with data than the PMIP1 simulations.

  6. Significant Features Found in Simulated Tropical Climates Using a Cloud Resolving Model

    Science.gov (United States)

    Shie, C.-L.; Tao, W.-K.; Simpson, J.; Sui, C.-H.

    2000-01-01

Cloud resolving models (CRMs) have been widely used in recent years for simulations involving studies of radiative-convective systems and their role in determining the tropical regional climate. The growing popularity of CRMs can be credited to their inclusion of crucial and realistic features such as explicit cloud-scale dynamics, sophisticated microphysical processes, and explicit radiative-convective interaction. For example, one study using a two-dimensional cloud model with a radiative-convective interaction process found a QBO-like (quasi-biennial oscillation) oscillation of the mean zonal wind that affected the convective system. Accordingly, the model-generated rain band corresponding to convective activity propagated in the direction of the low-level zonal mean winds; however, the precipitation became "localized" (limited to a small portion of the domain) when the zonal mean winds were removed. Two other CRM simulations, by S94 and Grabowski et al. (1996, hereafter G96), which produced distinctive quasi-equilibrium ("climate") states of both tropical water and energy, i.e., a cold/dry state in S94 and a warm/wet state in G96, were later investigated by T99. They found that the pattern of the imposed large-scale horizontal wind and the magnitude of the imposed surface fluxes were the two crucial mechanisms determining the tropical climate states. The warm/wet climate was associated with prescribed strong surface winds, or with maintained strong vertical wind shears under which well-organized convective systems prevailed. On the other hand, the cold/dry climate was produced by imposed weak surface winds and weak wind shears through a vertical mixing process by convection. In this study, considered a sequel to T99, the model simulations to be presented are generally similar to those of T99 (where a detailed model setup can be found), except for a more detailed discussion along with a few more simulated experiments. There are twelve major

  7. Grade 12 Students' Conceptual Understanding and Mental Models of Galvanic Cells before and after Learning by Using Small-Scale Experiments in Conjunction with a Model Kit

    Science.gov (United States)

    Supasorn, Saksri

    2015-01-01

    This study aimed to develop the small-scale experiments involving electrochemistry and the galvanic cell model kit featuring the sub-microscopic level. The small-scale experiments in conjunction with the model kit were implemented based on the 5E inquiry learning approach to enhance students' conceptual understanding of electrochemistry. The…

  9. Common scale features of the recent Greek and Serbian church chant traditions

    Directory of Open Access Journals (Sweden)

    Peno Vesna

    2008-01-01

Full Text Available This paper is an attempt to show the similarity between the Serbian and Greek post-Byzantine chanting traditions, especially as regards the scale organization of the modes. Three teachers and reformers from Constantinople, Chrisantos, Gregorios and Chourmousios, established a fairly firm theoretical system for the first time during the long history of church chant. One of the main results of their reform, besides changes relating to neumes, was the assignment of strict sizes to the intervals in the natural tonal system. There are three kinds of natural scales: diatonic, chromatic and enharmonic. They all have their place in the Greek Anastasimatarion chant book, whose first edition was prepared by Petar Peloponesios and later edited by Ionnes Protopsaltes. The first, first plagal and fourth plagal modes are diatonic in each of their melos, with very few exceptions; the second and second plagal are soft and hard chromatic, while the third and varis are enharmonic. It is important to note that the Greek chanter is very conscious of the scale foundation of the melody, so he first chants the apechima, the intonation formula that comprises all the details needed to enter the appropriate mode, i.e. melos. One mode may use one kind of scale for all groups of melodies (melos). However, in some modes there are different melos whose scale organization is not equal at all. This means that it is not proper to equate mode with scale, but rather to look for the specific scale's shape through the melodies that belong to the melos. The absence of a formal Serbian church music theory and, especially, the very conservative way in which church melodies are learnt by ear and by heart, have caused significant gaps, which preclude an adequate approach to the essential principles of Serbian chant. Over the years many Serbian chanters and musicians have noted down church melodies, especially those from the Octoechos, in F or in G, with the key

  10. Salient Features of the Harnischfeger-Wiley Model

    Science.gov (United States)

    Hallinan, Maureen T.

    1976-01-01

    Explicates the Harnischfeger-Wiley model and points out its properties, underlying assumptions, and location in the literature on achievement. It also describes and critiques an empirical test by Harnischfeger and Wiley of their model. (Author/IRT)

  11. CLASSIFICATION OF URBAN FEATURE FROM UNMANNED AERIAL VEHICLE IMAGES USING GASVM INTEGRATION AND MULTI-SCALE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    M. Modiri

    2015-12-01

    Full Text Available The use of UAVs in photogrammetry to obtain image coverage and achieve the main objectives of photogrammetric mapping has boomed in recent years. Images taken over the REGGIOLO region in the province of Reggio Emilia, Italy, by a UAV with a non-metric Canon Ixus camera at an average flying height of 139.42 m were used to classify urban features. Using the SURE software and the image coverage of the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. A DTM of the area was derived using an adaptive TIN filtering algorithm. An nDSM was obtained as the difference between the DSM and the DTM and stacked as a separate feature layer. For feature extraction, grey-level co-occurrence matrix measures (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation) were computed for each RGB band of the orthophoto. The classes used for the urban classification problem were buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surface class includes pavements, cement, cars and roofs. Pixel-based classification and selection of the optimal features were carried out with GASVM (a genetic algorithm combined with a support vector machine). To achieve higher classification accuracy, the spectral, textural and shape information of the orthophoto was combined through multi-scale segmentation and object-based refinement. The results of the proposed classification of urban features suggest the suitability of this method for classifying urban scenes from UAV images. The overall accuracy and kappa coefficient of the method proposed in this study were 93.47% and 91.84%, respectively.
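The nDSM differencing and the grey-level co-occurrence matrix (GLCM) texture measures named in the abstract can be sketched in a few lines of numpy. The grids and the single pixel offset below are hypothetical, and only four of the eight listed statistics are shown:

```python
import numpy as np

# Hypothetical 8 x 8 height grids standing in for the UAV-derived surfaces
dsm = np.full((8, 8), 12.0)   # digital surface model [m]
dtm = np.full((8, 8), 10.0)   # digital terrain model [m]
ndsm = dsm - dtm              # nDSM = DSM - DTM, stacked as an extra feature band

def glcm_features(img, levels=8):
    """Four grey-level co-occurrence texture measures for the offset (0, 1).

    `img` is a float image scaled to [0, 1]; it is quantized to `levels`
    grey levels before the co-occurrence matrix is accumulated.
    """
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # count pixel pairs
    p = glcm / glcm.sum()                                      # joint probabilities
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "homogeneity": (p / (1.0 + (i - j) ** 2)).sum(),
        "second_moment": (p ** 2).sum(),
        "entropy": -(nz * np.log(nz)).sum(),
    }

feats = glcm_features(np.clip(ndsm / ndsm.max(), 0.0, 1.0))
```

For the perfectly flat toy surface the texture is degenerate: contrast and entropy are zero, homogeneity and second moment are one.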

  12. Multi-scale modelling and simulation in systems biology.

    Science.gov (United States)

    Dada, Joseph O; Mendes, Pedro

    2011-02-01

    The aim of systems biology is to describe and understand biology at a global scale, where biological functions are recognised as the result of complex mechanisms that happen at several scales, from the molecular to the ecosystem. Modelling and simulation are computational tools that are invaluable for describing, predicting and understanding these mechanisms in a quantitative and integrative way. Therefore the study of biological functions is greatly aided by multi-scale methods that enable the coupling and simulation of models spanning several spatial and temporal scales. Various methods have been developed for solving multi-scale problems in many scientific disciplines; they are applicable to continuum-based modelling techniques, in which the relationships between system properties are expressed with continuous mathematical equations, and to discrete modelling techniques that are based on individual units to model heterogeneous microscopic elements such as individuals or cells. In this review, we survey these multi-scale methods and explore their application in systems biology.

  13. Gauge coupling unification in a classically scale invariant model

    Science.gov (United States)

    Haba, Naoyuki; Ishida, Hiroyuki; Takahashi, Ryo; Yamaguchi, Yuya

    2016-02-01

    There are many works within the class of classically scale invariant models, which are motivated by solving the gauge hierarchy problem. In this context, the Higgs mass vanishes at the UV scale due to classical scale invariance, and is generated via the Coleman-Weinberg mechanism. Since the mass generation should occur not so far from the electroweak scale, we extend the standard model only around the TeV scale. We construct a model which can achieve the gauge coupling unification at the UV scale. In the same way, the model can realize the vacuum stability, smallness of active neutrino masses, baryon asymmetry of the universe, and dark matter relic abundance. The model predicts the existence of vector-like fermions charged under SU(3)_C with masses lower than 1 TeV, and the SM singlet Majorana dark matter with mass lower than 2.6 TeV.

  14. Gauge coupling unification in a classically scale invariant model

    CERN Document Server

    Haba, Naoyuki; Takahashi, Ryo; Yamaguchi, Yuya

    2015-01-01

    There are many works within the class of classically scale invariant models, which are motivated by solving the gauge hierarchy problem. In this context, the Higgs mass vanishes at the UV scale due to classical scale invariance, and is generated via the Coleman-Weinberg mechanism. Since the mass generation should occur not so far from the electroweak scale, we extend the standard model only around the TeV scale. We construct a model which can achieve the gauge coupling unification at the UV scale. In the same way, the model can realize the vacuum stability, smallness of active neutrino masses, baryon asymmetry of the universe, and dark matter relic abundance. The model predicts the existence of vector-like fermions charged under $SU(3)_C$ with masses lower than $1\\,{\\rm TeV}$, and the SM singlet Majorana dark matter with mass lower than $2.6\\,{\\rm TeV}$.

  15. Integrative modeling reveals the principles of multi-scale chromatin boundary formation in human nuclear organization.

    Science.gov (United States)

    Moore, Benjamin L; Aitken, Stuart; Semple, Colin A

    2015-05-27

    Interphase chromosomes adopt a hierarchical structure, and recent data have characterized their chromatin organization at very different scales, from sub-genic regions associated with DNA-binding proteins at the order of tens or hundreds of bases, through larger regions with active or repressed chromatin states, up to multi-megabase-scale domains associated with nuclear positioning, replication timing and other qualities. However, we have lacked detailed, quantitative models to understand the interactions between these different strata. Here we collate large collections of matched locus-level chromatin features and Hi-C interaction data, representing higher-order organization, across three human cell types. We use quantitative modeling approaches to assess whether locus-level features are sufficient to explain higher-order structure, and identify the most influential underlying features. We identify structurally variable domains between cell types and examine the underlying features to discover a general association with cell-type-specific enhancer activity. We also identify the most prominent features marking the boundaries of two types of higher-order domains at different scales: topologically associating domains and nuclear compartments. We find parallel enrichments of particular chromatin features for both types, including features associated with active promoters and the architectural proteins CTCF and YY1. We show that integrative modeling of large chromatin dataset collections using random forests can generate useful insights into chromosome structure. The models produced recapitulate known biological features of the cell types involved, allow exploration of the antecedents of higher-order structures and generate testable hypotheses for further experimental studies.
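The random-forest idea behind the paper's integrative modeling can be illustrated on synthetic data. The sketch below is a toy bagged ensemble of decision stumps (a stripped-down stand-in for the random forests used in the study), with made-up "CTCF", "YY1" and noise features and a made-up boundary rule; none of the data or parameters come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic locus-level features at 400 genomic bins:
# column 0 ~ CTCF signal, 1 ~ YY1 signal, 2 ~ an uninformative mark
X = rng.normal(size=(400, 3))
# toy ground truth: a boundary occurs where CTCF + YY1 signal is high
y = ((X[:, 0] + X[:, 1]) > 0.5).astype(int)

def fit_stump(Xb, yb, rng):
    """One randomized decision stump trained on a bootstrap sample."""
    f = int(rng.integers(Xb.shape[1]))     # random feature
    t = float(rng.choice(Xb[:, f]))        # random data-driven threshold
    pred = (Xb[:, f] > t).astype(int)
    sign = 1 if (pred == yb).mean() >= 0.5 else -1  # flip polarity if needed
    return f, t, sign

stumps = []
for _ in range(300):
    idx = rng.integers(0, len(X), len(X))  # bootstrap resample
    stumps.append(fit_stump(X[idx], y[idx], rng))

def predict(X):
    """Majority vote of all stumps."""
    votes = np.zeros(len(X))
    for f, t, s in stumps:
        votes += s * (2 * (X[:, f] > t).astype(int) - 1)
    return (votes > 0).astype(int)

accuracy = float((predict(X) == y).mean())
```

A real analysis would additionally rank feature importances, which is how the study identifies CTCF and YY1 as prominent boundary markers.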

  16. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    Science.gov (United States)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale of application depends on the overall question under study. Therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth examining the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, and the question therefore arises how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies the process scale, measurement scale and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales: Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data of state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on

  17. The Personality Assessment Inventory as a Proxy for the Psychopathy Checklist-Revised: Testing the Incremental Validity and Cross-Sample Robustness of the Antisocial Features Scale

    Science.gov (United States)

    Douglas, Kevin S.; Guy, Laura S.; Edens, John F.; Boer, Douglas P.; Hamilton, Jennine

    2007-01-01

    The Personality Assessment Inventory's (PAI's) ability to predict psychopathic personality features, as assessed by the Psychopathy Checklist-Revised (PCL-R), was examined. To investigate whether the PAI Antisocial Features (ANT) Scale and subscales possessed incremental validity beyond other theoretically relevant PAI scales, optimized regression…

  18. Multi-scale Modelling of the Ocean Beneath Ice Shelves

    Science.gov (United States)

    Candy, A. S.; Kimura, S.; Holland, P.; Kramer, S. C.; Piggott, M. D.; Jenkins, A.; Pain, C. C.

    2011-12-01

    Quantitative prediction of future sea-level is currently limited because we lack an understanding of how the mass balance of the Earth's great ice sheets responds to and influences the climate. Understanding the behaviour of the ocean beneath an ice shelf and its interaction with the sheet above presents a great scientific challenge. A solid ice cover, in many places kilometres thick, bars access to the water column, so that observational data can only be obtained by drilling holes through, or launching autonomous vehicles beneath, the ice. In the absence of a comprehensive observational database, numerical modelling can be a key tool for advancing our understanding of the sub-ice-shelf regime. While we have a reasonable understanding of the overall ocean circulation and basic sensitivities, there remain critical processes that are difficult or impossible to represent in current operational models. Resolving these features adequately within a domain that includes the entire ice shelf and continental shelf to the north can be difficult with a structured horizontal resolution. It is currently impossible to adequately represent the key grounding line region, where the water column thickness reduces to zero, with a structured vertical grid. In addition, fronts and pycnoclines, the ice front geometry, shelf basal irregularities and modelling surface pressure all prove difficult in current approaches. The Fluidity-ICOM model (Piggott et al. 2008, doi:10.1002/fld.1663) simulates non-hydrostatic dynamics on meshes that can be unstructured in all three dimensions and uses anisotropic adaptive resolution which optimises the mesh and calculation in response to evolving solution dynamics. These features give it the flexibility required to tackle the challenges outlined above and the opportunity to develop a model that can improve understanding of the physical processes occurring under ice shelves. The approaches taken to develop a multi-scale model of ice shelf ocean cavity

  19. Small-scale features in the Earth's magnetic field observed by Magsat.

    Science.gov (United States)

    Cain, J.C.; Schmitz, D.R.; Muth, L.

    1984-01-01

    A spherical harmonic expansion to degree and order 29 is derived using a selected magnetically quiet sample of Magsat data. Global maps representing the contribution due to terms of the expansion above n = 13 at 400 km altitude are compared with previously published residual anomaly maps and shown to be similar, even in polar regions. An expansion with such a high degree and order displays all but the sharpest features seen by the satellite and gives a more consistent picture of the high-order field structure at a constant altitude than do component maps derived independently. -Authors
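Splitting the expansion into a long-wavelength main field (degrees n ≤ 13) and a small-scale residual (n = 14..29) is linear in the coefficients. A minimal numpy sketch, restricted to zonal (m = 0) terms and using hypothetical coefficients rather than the actual Magsat values:

```python
import numpy as np

rng = np.random.default_rng(7)
nmax = 29
n = np.arange(nmax + 1)
# hypothetical zonal (m = 0) coefficients, decaying with degree
g = rng.normal(size=nmax + 1) / (n + 1) ** 2

x = np.cos(np.linspace(0.0, np.pi, 181))   # cos(colatitude) grid

# evaluate the Legendre series with all degrees, with n <= 13 only, and
# with n > 13 only; the last is the small-scale "residual anomaly" part
full = np.polynomial.legendre.legval(x, g)
core = np.polynomial.legendre.legval(x, np.where(n <= 13, g, 0.0))
anomaly = np.polynomial.legendre.legval(x, np.where(n > 13, g, 0.0))
```

Because the synthesis is linear, the main-field and residual parts sum back to the full field, which is the property the record exploits when comparing the n > 13 terms with residual anomaly maps.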

  20. Adaptive object recognition model using incremental feature representation and hierarchical classification.

    Science.gov (United States)

    Jeong, Sungmoon; Lee, Minho

    2012-01-01

    This paper presents an adaptive object recognition model based on incremental feature representation and a hierarchical feature classifier that offers plasticity to accommodate additional input data and reduces the problem of forgetting previously learned information. The incremental feature representation method applies adaptive prototype generation with a cortex-like mechanism to conventional feature representation to enable an incremental reflection of various object characteristics, such as feature dimensions in the learning process. A feature classifier based on using a hierarchical generative model recognizes various objects with variant feature dimensions during the learning process. Experimental results show that the adaptive object recognition model successfully recognizes single and multiple-object classes with enhanced stability and flexibility.

  1. Topological Properties and Transition Features Generated by a New Hybrid Preferential Model

    Institute of Scientific and Technical Information of China (English)

    FANG Jin-Qing; LIANG Yong

    2005-01-01

    A new hybrid preferential model (HPM) is proposed for generating both scale-free and small-world properties. The topological transition features in the HPM from random preferential attachment to deterministic preferential attachment are investigated. It is found that the exponents γ of the power law are very sensitive to the hybrid ratio (d/r) of deterministic to random attachment, and γ increases as the ratio d/r increases. It is also found that there exists a threshold at d/r = 1/1, beyond which γ increases rapidly and can tend to infinity if there is no random preferential attachment (r = 0), which implies that the power-law scaling disappears completely. Moreover, it is also found that as the ratio d/r increases, the average path length L decreases, while the average clustering coefficient C increases. Compared to the BA model and the random graph, the new HPM has both the smallest L and the biggest C, which is consistent with most real-world growing networks.
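A growth process mixing deterministic and random preferential attachment can be sketched in pure Python. The parameter `p_det` below is a hypothetical reading of the hybrid ratio d/r (probability that a link attaches deterministically to the current highest-degree node); this is an illustration of the mechanism, not the authors' exact model:

```python
import random
from collections import Counter

def hybrid_preferential_graph(n, m=2, p_det=0.5, seed=0):
    """Grow an n-node network; each new node adds m links.  With probability
    p_det a link is made by deterministic preferential attachment (to the
    currently highest-degree node not yet chosen), otherwise by
    degree-proportional random preferential attachment."""
    rng = random.Random(seed)
    deg = Counter()
    pool = []                               # node id repeated deg times
    for i in range(m + 1):                  # seed: complete graph on m+1 nodes
        for j in range(i):
            deg[i] += 1
            deg[j] += 1
            pool += [i, j]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            if rng.random() < p_det:        # deterministic attachment
                cand = max((u for u in deg if u not in chosen), key=deg.get)
            else:                           # random preferential attachment
                cand = rng.choice(pool)
            chosen.add(cand)
        for v in chosen:
            deg[new] += 1
            deg[v] += 1
            pool += [new, v]
    return deg

deg = hybrid_preferential_graph(300, m=2, p_det=0.5)
```

Raising `p_det` concentrates links on the hub, which is the qualitative reason γ grows (and the degree distribution eventually ceases to be a power law) as d/r increases.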

  2. Features of Balance Model Development of Exclave Region

    Directory of Open Access Journals (Sweden)

    Timur Rustamovich Gareev

    2015-06-01

    Full Text Available In the article, the authors build a balance model for an exclave region. The aim of the work is to explore the unique properties of exclaves in order to evaluate the possibility of developing a more complex model of a regional economy. Exclaves are strange phenomena in both theoretical and practical regional economics. There is a lack of comparative models, so it is typically quite challenging to study exclaves. At the same time, exclaves produce better statistics, which allows more careful consideration of cross-regional economic flows. The authors discuss methodologies of model-based regional development forecasting. They analyze the balance approach on the more general level of regional governance and individually, on the example of specific territories. Thus, they identify and explain the need to develop balance-approach models fitted to the special needs of certain territories. By combining regional modeling for an exclave with traditional balance and simulation-based methods and an event-based approach, they come up with a more detailed model for the economy of a region. Having taken one Russian exclave as an example, the authors have developed a simulation event-based long-term sustainability model. In the article, they provide the general characteristics of the model, describe its components, and present the simulation algorithm. The approach introduced in this article combines traditional balance models with the peculiarities of an exclave region to develop a holistic regional economy model (with the Kaliningrad region serving as an example). It is important to underline that the resulting model helps to evaluate the degree of influence of preferential economic regimes (such as the Free Customs Zone) on the economy of a region.

  3. Geometric Feature Extraction and Model Reconstruction Based on Scattered Data

    Institute of Scientific and Technical Information of China (English)

    胡鑫; 习俊通; 金烨

    2004-01-01

    A method of 3D model reconstruction based on scattered point data in reverse engineering is presented here. The topological relationship of the scattered points was established first, then the data set was triangulated to reconstruct the mesh surface model. The curvatures of the cloud data were calculated based on the mesh surface, and the point data were segmented by an edge-based method; each patch of data was fitted with a quadric or freeform surface, with the type of quadric surface decided automatically from its parameters, and at last the whole CAD model was created. An example of a mouse model was employed to confirm the effect of the algorithm.

  4. Cognitive Scale-Free Networks as a Model for Intermittency in Human Natural Language

    Science.gov (United States)

    Allegrini, Paolo; Grigolini, Paolo; Palatella, Luigi

    We model certain features of human language complexity by means of advanced concepts borrowed from statistical mechanics. Using a time series approach, the diffusion entropy method (DE), we compute the complexity of an Italian corpus of newspapers and magazines. We find that the anomalous scaling index is compatible with a simple dynamical model, a random walk on a complex scale-free network, which is linguistically related to Saussure's paradigms. The model yields the famous Zipf's law in terms of the generalized central limit theorem.
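The diffusion entropy (DE) method turns a sequence of increments into an ensemble of diffusion trajectories and reads a scaling index δ off the slope of the entropy S(t) against ln t; for ordinary (Gaussian) diffusion δ = 0.5, while anomalous scaling shows up as a different slope. A minimal sketch on surrogate Gaussian increments rather than a real corpus:

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.standard_normal(50_000)            # surrogate sequence of increments
csum = np.concatenate(([0.0], np.cumsum(xi)))

def diffusion_entropy(t, width=0.5):
    """Shannon entropy of the diffusion pdf p(x, t), histogram-estimated."""
    x = csum[t:] - csum[:-t]                # displacements of all length-t windows
    counts, _ = np.histogram(x, bins=np.arange(x.min(), x.max() + width, width))
    p = counts[counts > 0] / len(x)
    return -(p * np.log(p)).sum() + np.log(width)

ts = np.array([16, 32, 64, 128, 256])
S = np.array([diffusion_entropy(t) for t in ts])
delta = np.polyfit(np.log(ts), S, 1)[0]     # scaling index; ~0.5 for ordinary diffusion
```

Applying the same procedure to event sequences extracted from a text (as the authors do) would reveal any anomalous δ ≠ 0.5.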

  5. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates seque...

  6. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2016-01-01

    checking (BMC) and inductive reasoning, it is verified that the generated model instance satisfies the generated safety properties. Using this method, we are able to verify the safety properties for model instances corresponding to railway networks of industrial size. Experiments show that BMC is also...

  7. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  8. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA Unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  9. Multi-scale observation and cross-scale mechanistic modeling on terrestrial ecosystem carbon cycle

    Institute of Scientific and Technical Information of China (English)

    CAO; Mingkui; YU; Guirui; LIU; Jiyuan; LI; Kerang

    2005-01-01

    To predict global climate change and to implement the Kyoto Protocol for stabilizing atmospheric greenhouse gas concentrations requires quantifying spatio-temporal variations in the terrestrial carbon sink accurately. During the past decade multi-scale ecological experiment and observation networks have been established using various new technologies (e.g. controlled environmental facilities, eddy covariance techniques and quantitative remote sensing), and have obtained a large amount of data about the terrestrial ecosystem carbon cycle. However, uncertainties in the magnitude and spatio-temporal variations of the terrestrial carbon sink and in understanding the underlying mechanisms have not been reduced significantly. One of the major reasons is that the observations and experiments were conducted at individual scales independently, but it is the interactions of factors and processes at different scales that determine the dynamics of the terrestrial carbon sink. Since experiments and observations are always conducted at specific scales, understanding cross-scale interactions requires mechanistic analysis, which is best achieved by mechanistic modeling. However, mechanistic ecosystem models are mainly based on data from single-scale experiments and observations and hence have no capacity to simulate mechanistic cross-scale interconnection and interactions of ecosystem processes. New-generation mechanistic ecosystem models based on a new ecological theoretical framework are needed to quantify the mechanisms from micro-level fast eco-physiological responses to macro-level slow acclimation in the pattern and structure of disturbed ecosystems. Multi-scale data-model fusion is a recently emerging approach that assimilates multi-scale observational data into mechanistic, dynamic modeling, in which the structure and parameters of mechanistic models for simulating cross-scale interactions are optimized using multi-scale observational data. The models are validated and

  10. Radiative transfer modelling of parsec-scale dusty warped discs

    CERN Document Server

    Jud, H; Mould, J; Burtscher, L; Tristram, K R W

    2016-01-01

    Warped discs have been found on (sub-)parsec scale in some nearby Seyfert nuclei, identified by their maser emission. Using dust radiative transfer simulations we explore their observational signatures in the infrared in order to find out whether they can partly replace the molecular torus. Strong variations of the brightness distributions are found, depending on the orientation of the warp with respect to the line of sight. Whereas images at short wavelengths typically show a disc-like and a point source component, the warp itself only becomes visible at far-infrared wavelengths. A similar variety is visible in the shapes of the spectral energy distributions. Especially for close to edge-on views, the models show silicate feature strengths ranging from deep absorption to strong emission for variations of the lines of sight towards the warp. To test the applicability of our model, we use the case of the Circinus galaxy, where infrared interferometry has revealed a highly elongated emission component matching ...

  11. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

    Positional error of line segments is usually described by the "g-band"; however, its band width depends on the chosen confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of the confidence level on the error indicator, we introduce union entropy theory and propose an entropy error ellipse index for points, then extend it to line segments and polygons, establishing an entropy error band for line segments and an entropy error donut for polygons. The research shows that the entropy error index can be determined uniquely and is not influenced by the confidence level, and that these indices are suitable for describing the positional uncertainty of planar geometry features.
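Why an entropy-based index is free of the confidence-level choice can be seen for a bivariate normal positional error: its differential entropy, H = ½ ln((2πe)² |Σ|), depends only on the covariance matrix, with no quantile in sight. The sketch below is a hedged numerical illustration of that property (and of an entropy-equivalent isotropic error obtained by matching determinants), not the paper's exact index definitions:

```python
import math

def entropy_2d_gaussian(cov):
    """Differential entropy H = 0.5 * ln((2*pi*e)**2 * det(cov))."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return 0.5 * math.log((2 * math.pi * math.e) ** 2 * det)

def entropy_equivalent_sigma(cov):
    """Std. dev. of the isotropic error with the same entropy: matching
    determinants gives sigma = det(cov) ** 0.25, with no confidence level."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return det ** 0.25

cov = [[4.0, 0.0], [0.0, 4.0]]   # sigma_x = sigma_y = 2, uncorrelated
H = entropy_2d_gaussian(cov)
sigma_eq = entropy_equivalent_sigma(cov)
```

By contrast, any error ellipse or band drawn at a chosen probability level scales with that level, which is exactly the ambiguity the entropy index removes.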

  12. Generalization Technique for 2D+SCALE DHE Data Model

    Science.gov (United States)

    Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised on fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension such as scale and/or time into a 3D model, but the implementation of a scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure towards integration with a scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure forms the major discussion of this paper. We strongly believe that advantages such as local modification and topological elements (navigation, query and semantic information) in the scale dimension could be used for future 3D-scale applications.

  13. Genome-scale NAD(H/+) availability patterns as a differentiating feature between Saccharomyces cerevisiae and Scheffersomyces stipitis in relation to fermentative metabolism.

    Directory of Open Access Journals (Sweden)

    Alejandro Acevedo

    Full Text Available Scheffersomyces stipitis is a yeast able to ferment pentoses to ethanol; unlike Saccharomyces cerevisiae, it does not present the so-called overflow phenomenon. Metabolic features characterizing the presence or absence of this phenomenon have not been fully elucidated. This work proposes that the genome-scale metabolic response to variations in NAD(H/+) availability characterizes fermentative behavior in both yeasts. Thus, differentiating features in S. stipitis and S. cerevisiae were determined by analyzing the growth sensitivity response to changes in available reducing capacity in relation to ethanol production capacity and overall metabolic flux span. Using genome-scale constraint-based metabolic models, phenotypic phase planes and shadow price analyses, an excess of available reducing capacity for growth was found in S. cerevisiae at every metabolic phenotype where growth is limited by oxygen uptake, while in S. stipitis this was observed only for a subset of those phenotypes. Moreover, by using flux variability analysis, an increased metabolic flux span was found in S. cerevisiae at growth limited by oxygen uptake, while in S. stipitis the flux span was invariant. Therefore, each yeast can be characterized by a significantly different metabolic response and flux span when growth is limited by oxygen uptake, both features suggesting a higher metabolic flexibility in S. cerevisiae. By applying an optimization-based approach to the genome-scale models, three single reaction deletions were found to generate in S. stipitis the reducing capacity availability pattern found in S. cerevisiae; two of them correspond to reactions involved in the overflow phenomenon. These results show a close relationship between the growth sensitivity response given by the metabolic network and fermentative behavior.
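Flux variability analysis (FVA), used above to measure flux span, is a pair of linear programs per reaction: fix the growth (or production) objective at its optimum, then minimize and maximize each flux subject to steady state and bounds. A toy sketch with `scipy.optimize.linprog` on a one-metabolite, three-reaction network (entirely hypothetical, not the genome-scale yeast models used in the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix: one metabolite A, three hypothetical reactions
# R1: -> A (uptake, bounded), R2: A -> P1, R3: A -> P2
S = np.array([[1.0, -1.0, -1.0]])
bounds = [(0, 10), (0, None), (0, None)]

# FBA step: maximize total product formation v2 + v3 (linprog minimizes)
c = np.array([0.0, -1.0, -1.0])
fba = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
z_opt = -fba.fun

# FVA step: pin the objective at its optimum, then min/max each flux
A_eq = np.vstack([S, -c])          # extra row enforces v2 + v3 = z_opt
b_eq = [0.0, z_opt]
spans = []
for j in range(3):
    e = np.zeros(3)
    e[j] = 1.0
    lo = linprog(e, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
    hi = -linprog(-e, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
    spans.append((round(lo, 6), round(hi, 6)))
```

Here the uptake flux is pinned at 10 while the two branch fluxes each span [0, 10]: a wide span signals metabolic flexibility, the quantity the authors compare between the two yeasts.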

  14. Genome-Scale NAD(H/+) Availability Patterns as a Differentiating Feature between Saccharomyces cerevisiae and Scheffersomyces stipitis in Relation to Fermentative Metabolism

    Science.gov (United States)

    Acevedo, Alejandro; Aroca, German; Conejeros, Raul

    2014-01-01

    Scheffersomyces stipitis is a yeast able to ferment pentoses to ethanol; unlike Saccharomyces cerevisiae, it does not present the so-called overflow phenomenon. Metabolic features characterizing the presence or absence of this phenomenon have not been fully elucidated. This work proposes that the genome-scale metabolic response to variations in NAD(H/+) availability characterizes fermentative behavior in both yeasts. Thus, differentiating features in S. stipitis and S. cerevisiae were determined by analyzing the growth sensitivity response to changes in available reducing capacity in relation to ethanol production capacity and overall metabolic flux span. Using genome-scale constraint-based metabolic models, phenotypic phase planes and shadow price analyses, an excess of available reducing capacity for growth was found in S. cerevisiae at every metabolic phenotype where growth is limited by oxygen uptake, while in S. stipitis this was observed only for a subset of those phenotypes. Moreover, by using flux variability analysis, an increased metabolic flux span was found in S. cerevisiae at growth limited by oxygen uptake, while in S. stipitis the flux span was invariant. Therefore, each yeast can be characterized by a significantly different metabolic response and flux span when growth is limited by oxygen uptake, both features suggesting a higher metabolic flexibility in S. cerevisiae. By applying an optimization-based approach to the genome-scale models, three single reaction deletions were found to generate in S. stipitis the reducing capacity availability pattern found in S. cerevisiae; two of them correspond to reactions involved in the overflow phenomenon. These results show a close relationship between the growth sensitivity response given by the metabolic network and fermentative behavior. PMID:24489927

  15. Integrating coarse-scale uncertain soil moisture data into a fine-scale hydrological modelling scenario

    Directory of Open Access Journals (Sweden)

    H. Vernieuwe

    2011-06-01

Full Text Available In a hydrological modelling scenario, the modeller is often confronted with external data, such as remotely sensed soil moisture observations, that become available to update the model output. However, the scale triplet (spacing, extent and support) of these data is often inconsistent with that of the model. Furthermore, the external data can be subject to epistemic uncertainty. Hence, a method is needed that not only integrates the external data into the model, but also takes into account the difference in scale and the uncertainty of the observations. In this paper, a synthetic hydrological modelling scenario is set up in which a high-resolution distributed hydrological model is run over an agricultural field. At regular time steps, coarse-scale field-averaged soil moisture data, described by means of possibility distributions (epistemic uncertainty), are retrieved by synthetic aperture radar and assimilated into the model. A method is presented that integrates the coarse-scale possibility distribution of soil moisture content data with the fine-scale model-based soil moisture data. To this end, a scaling relationship between field-averaged soil moisture content and its corresponding standard deviation is employed.

  16. Scaling of musculoskeletal models from static and dynamic trials

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    2015-01-01

    Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale ‘generic’ musculoskeletal models to match segment lengths and joint...... parameters to a specific subject and compare the results to a simpler approach based on linear, segment-wise scaling. By incorporating data from functional and standing reference trials, the new scaling approaches reduce the model sensitivity to assumed model marker positions. For validation, we applied all...... three scaling methods to an inverse dynamics-based musculoskeletal model and compared predicted knee joint contact forces to those measured with an instrumented prosthesis during gait. Additionally, a Monte Carlo study was used to investigate the sensitivity of the knee joint contact force to random...

  17. Correlation between clinical and histological features in a pig model of choroidal neovascularization

    DEFF Research Database (Denmark)

    Lassota, Nathan; Kiilgaard, Jens Folke; Prause, Jan Ulrik;

    2006-01-01

To analyse the histological changes in the retina and the choroid in a pig model of choroidal neovascularization (CNV) and to correlate these findings with fundus photographic and fluorescein angiographic features.

  18. Modeling aerosol processes at the local scale

    Energy Technology Data Exchange (ETDEWEB)

    Lazaridis, M.; Isukapalli, S.S.; Georgopoulos, P.G. [Environmental and Occupational Health Sciences Inst., NJ (United States)

    1998-12-31

    This work presents an approach for modeling photochemical gaseous and aerosol phase processes in subgrid plumes from major localized (e.g. point) sources (plume-in-grid modeling), thus improving the ability to quantify the relationship between emission source activity and ambient air quality. This approach employs the Reactive Plume Model (RPM-AERO) which extends the regulatory model RPM-IV by incorporating aerosol processes and heterogeneous chemistry. The physics and chemistry of elemental carbon, organic carbon, sulfate, sodium, chloride and crustal material of aerosols are treated and attributed to the PM size distribution. A modified version of the Carbon Bond IV chemical mechanism is included to model the formation of organic aerosol, and the inorganic multicomponent atmospheric aerosol equilibrium model, SEQUILIB is used for calculating the amounts of inorganic species in particulate matter. Aerosol dynamics modeled include mechanisms of nucleation, condensation and gas/particle partitioning of organic matter. An integrated trajectory-in-grid modeling system, UAM/RPM-AERO, is under continuing development for extracting boundary and initial conditions from the mesoscale photochemical/aerosol model UAM-AERO. The RPM-AERO is applied here to case studies involving emissions from point sources to study sulfate particle formation in plumes. Model calculations show that homogeneous nucleation is an efficient process for new particle formation in plumes, in agreement with previous field studies and theoretical predictions.

  19. Re-scaling social preference data: implications for modelling.

    Science.gov (United States)

    Cleemput, Irina; Kind, Paul; Kesteloot, Katrien

    2004-12-01

As applied in cost-utility analysis, generic health status indexes require that full health and dead are valued as 1 and 0, respectively. When social preference weights for health states are obtained using a visual analogue scale (VAS), their raw scores often lie on a scale with different endpoints (such as "best" and "worst" health). Re-scaling individual raw scores to a 0-1 scale leads to the exclusion of respondents who fail to value dead or full health. This study examined alternative approaches that do not impose such strict exclusion criteria. The impact of a different timing of re-scaling (before or after aggregation) and a different measure of central tendency (median or mean) is measured. Data from a postal valuation survey (n=722) conducted in Belgium are used. The following models are considered: (a) re-scaling values for EQ-5D health states on a within-respondent basis and using mean re-scaled values as proxies for social preference values, (b) using median re-scaled values as proxies for social preference values, (c) computing the median raw VAS values and then re-scaling, and (d) re-scaling mean raw VAS values. Exclusion rates, health state rankings and valuations and incremental value differences between pairs of states are computed for each model. Models that use a different timing of re-scaling are compared ceteris paribus to evaluate the importance of timing of re-scaling, and models that use a different measure of central tendency are compared ceteris paribus to evaluate the importance of the measure of central tendency. The exclusion rates are above 20% in the models that re-scale valuations before aggregation and less than 5% in the models that re-scale after aggregation. Health state valuations are found to be different in all two-by-two comparisons. Although in some comparisons the incremental values are statistically significantly different between models, they are never clinically significantly different. Differences in health state rankings ...
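The timing effect the abstract measures (re-scaling before versus after aggregation) can be shown numerically; the respondent scores below are invented for illustration, not survey data.

```python
import numpy as np

# Hypothetical raw VAS scores (0-100 scale) from three respondents:
# one health state, plus each respondent's own rating of "dead" and "full health".
state = np.array([60.0, 70.0, 50.0])
dead  = np.array([10.0,  0.0, 20.0])
full  = np.array([90.0, 100.0, 80.0])

# Re-scale within each respondent first, then aggregate (mean of re-scaled values).
rescale_then_mean = np.mean((state - dead) / (full - dead))

# Aggregate raw scores first, then re-scale the means.
mean_then_rescale = (state.mean() - dead.mean()) / (full.mean() - dead.mean())
```

The two quantities generally differ; note also that a respondent who rates dead and full health identically must be excluded under within-respondent re-scaling (division by zero), which is the source of the higher exclusion rates reported for re-scaling before aggregation.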

  20. Design models as emergent features: An empirical study in communication and shared mental models in instructional

    Directory of Open Access Journals (Sweden)

    Lucca Botturi

    2006-06-01

Full Text Available This paper reports the results of an empirical study that investigated the instructional design process of three teams involved in the development of an e-learning unit. The teams declared they were using the same fast-prototyping design and development model, and were composed of the same roles (although with a different number of SMEs). Results indicate that the design and development model actually informs the activities of the group, but that it is interpreted and adapted by the team for the specific project. Thus, the actual practice model of each team can be regarded as an emergent feature. This analysis delivers insights concerning team communication, shared understanding, individual perspectives and the implementation of prescriptive instructional design models.

  1. Development of the method for realization of spectral irradiance scale featuring a system of spectral comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Skerovic, V; Zarubica, V; Aleksic, M [Directorate of measures and precious metals, Optical radiation Metrology department, Mike Alasa 14, 11000 Belgrade (Serbia); Zekovic, L; Belca, I, E-mail: vladanskerovic@dmdm.r [Faculty of Physics, Department for Applied physics and metrology, Studentski trg 12-16, 11000 Belgrade (Serbia)

    2010-10-15

Realization of the scale of spectral responsivity of the detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with detector spectral responsivity calibrations by means of a primary spectrophotometric system. Linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.

  2. Mathematical Modelling of Silica Scaling Deposition in Geothermal Wells

    Science.gov (United States)

    Nizami, M.; Sutopo

    2016-09-01

Silica scaling is widely encountered in geothermal wells that produce two-phase geothermal fluid. Silica scale can form through chemical reactions when a geothermal fluid mixes with another fluid of different composition, or through changes in fluid properties caused by changes in pressure and temperature. One method to remove silica scale that has formed around a geothermal well is a workover operation. Silica deposition in porous media has been modeled previously; however, the growth of silica scale along geothermal wellbores has not. Modelling silica deposition along the wellbore is important for determining the depth interval over which silica scale grows and the best placement of workover devices to clean it. This study develops a mathematical model for predicting silica scaling along geothermal wellbores. The model integrates a silica solubility-temperature correlation with a coupled two-phase pressure drop and wellbore fluid temperature correlation for a production well; the coupled correlation used here is that of Hasan-Kabir. The modelling is divided into two categories: single-phase and two-phase fluid models. Silica deposition is constrained by the temperature distribution along the wellbore through the silica solubility correlation. The results visualize the growth of silica scale thickness along the wellbore in each depth segment. Sensitivity analysis is applied to several parameters: bottom-hole pressure, temperature, and silica concentration. Temperature is the most influential factor for silica scaling along the wellbore, together with the depth of the flash point; at the flash point, silica scale thickness reaches its maximum because of the reduced amount of water in the liquid phase.
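The temperature-solubility constraint at the heart of such a model can be sketched in a few lines. The sketch below assumes a Fournier-type quartz solubility correlation (log10 C = 5.19 - 1309/T, with C in mg/kg and T in kelvin) and a linear wellbore temperature profile; both are illustrative stand-ins for the paper's Hasan-Kabir coupled pressure/temperature model, and the numbers are invented.

```python
import numpy as np

def quartz_solubility_mg_kg(T_celsius):
    """Equilibrium silica solubility (assumed Fournier-type quartz correlation)."""
    return 10.0 ** (5.19 - 1309.0 / (T_celsius + 273.15))

# Assumed linear temperature profile: 250 C at 2000 m bottom-hole, 180 C at wellhead.
depth = np.linspace(2000.0, 0.0, 21)              # m, bottom to top
T = 180.0 + (250.0 - 180.0) * depth / 2000.0      # C

# Fluid assumed saturated with silica at reservoir temperature.
c_in = quartz_solubility_mg_kg(250.0)

# Supersaturation per segment: a proxy for deposition potential. It grows toward
# the cooler wellhead, consistent with scale thickness increasing up the well.
excess = np.maximum(0.0, c_in - quartz_solubility_mg_kg(T))
```

A full model would convert this supersaturation into a deposition rate and scale thickness per segment, and would recompute the temperature profile from the two-phase flow correlation rather than assuming it.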

  3. Combining sigma-lognormal modeling and classical features for analyzing graphomotor performances in kindergarten children.

    Science.gov (United States)

    Duval, Thérésa; Rémi, Céline; Plamondon, Réjean; Vaillant, Jean; O'Reilly, Christian

    2015-10-01

This paper investigates the advantage of using the kinematic theory of rapid human movements as a complementary approach to those based on classical dynamical features to characterize and analyze kindergarten children's ability to engage in graphomotor activities as a preparation for handwriting learning. This study analyzes nine different movements taken from 48 children evenly distributed among three different school grades corresponding to pupils aged 3, 4, and 5 years. On the one hand, our results show that the ability to perform graphomotor activities depends on kindergarten grades. More importantly, this study shows which performance criteria, from sophisticated neuromotor modeling as well as more classical kinematic parameters, can differentiate children of different school grades. These criteria provide a valuable tool for studying children's graphomotor control learning strategies. On the other hand, from a practical point of view, it is observed that school grades do not clearly reflect pupils' graphomotor performances. This calls for a large-scale investigation, using a more efficient experimental design based on the various observations made throughout this study regarding the choice of the graphic shapes, the number of repetitions and the features to analyze.

  4. Feature and duration of metre-scale sequences in a storm-dominated carbonate ramp setting (Kimmeridgian, northeastern Spain)

    Science.gov (United States)

    Colombié, C.; Bádenas, B.; Aurell, M.; Götz, A. E.; Bertholon, S.; Boussaha, M.

    2014-10-01

Metre-scale sequences may result from the combined effects of allocyclic and autocyclic processes which are closely inter-related. The carbonate ramp that developed northwest of the Iberian Basin during the late Kimmeridgian was affected by northwestward migrating cyclones. Marl-limestone alternations that settled in mid-ramp environments contain abundant mm to cm thick coarse-grained accumulations that have been related to these events. The aim of this paper is to determine the impact of storm-induced processes on the metre-scale sequence features. Four sections (R3, R4, R6, and R7), which are 5 to 7 m in thickness, were studied bed-by-bed along a 4 km-long outcrop, which shows the transition between the shallow and the relatively deep realms of the middle ramp. Metre-scale sequences were defined and correlated along this outcrop according to the detailed microfacies analysis of host, fine-grained deposits, palynofacies and sequence-stratigraphic analyses, and carbon- and oxygen-isotope measurements. The evolution through time of sedimentary features such as the size of quartz grains and the relative abundance of grains other than quartz (i.e., muscovite, bivalve, ooid, and intraclast) does not correlate from one section to the other, suggesting that the finest as well as the coarsest sediment was reworked in these storm-dominated environments. Small- and medium-scale sequences are defined according to changes in alternation, marly interbed, and limestone bed thickness, and correlated from one section to the other. Because of the effects of storms on sediment distribution and preservation, sequence boundaries coincide with thin alternations and marly interbeds in the most proximal sections (i.e., R3, R4), while they correspond to thin alternations and limestone beds in the most distal sections (i.e., R6, R7). Field observations and palynofacies analyses confirm this sequence-stratigraphic analysis. The excursions in carbon- and oxygen-isotope values are consistent ...

  5. ScaleNet: a literature-based model of scale insect biology and systematics

    OpenAIRE

    García Morales, Mayrolin; Denno, Barbara D.; Miller, Douglass R.; Miller, Gary L.; Ben-Dov, Yair; Hardy, Nate B.

    2016-01-01

    Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found on all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis and plant-insect interactions. ScaleNet was launched in 1995 to provide insect identifiers, pest managers, insect systematists, evolutionary biologists and ecologists efficient access to information about scale insec...

  6. Structural and Molecular Modeling Features of P2X Receptors

    Directory of Open Access Journals (Sweden)

    Luiz Anastacio Alves

    2014-03-01

Full Text Available Currently, adenosine 5'-triphosphate (ATP) is recognized as the extracellular messenger that acts through P2 receptors. P2 receptors are divided into two subtypes: P2Y metabotropic receptors and P2X ionotropic receptors, both of which are found in virtually all mammalian cell types studied. Due to the difficulty in studying membrane protein structures by X-ray crystallography or NMR techniques, there is little information about these structures available in the literature. Two structures of the P2X4 receptor in truncated form have been solved by crystallography. Molecular modeling has proven to be an excellent tool for studying ionotropic receptors. Recently, modeling studies carried out on P2X receptors have advanced our knowledge of the P2X receptor structure-function relationships. This review presents a brief history of ion channel structural studies and shows how modeling approaches can be used to address relevant questions about P2X receptors.

  7. Systems Execution Modeling Technologies for Large-Scale Net-Centric Department of Defense Systems

    Science.gov (United States)

    2011-12-01

represents an indivisible unit of functionality, such as an EJB or CORBA component. A configuration is a valid composition of Features that produces a ... Component-based middleware, such as the Lightweight CORBA Component Model, are increasingly used to implement large-scale distributed, real-time and ... development, packaging, and deployment frameworks for a wide range of component middleware. Although originally developed for the CORBA Component Model ...

  8. Modeling place field activity with hierarchical slow feature analysis

    Directory of Open Access Journals (Sweden)

    Fabian eSchoenfeld

    2015-05-01

Full Text Available In this paper we present six experimental studies from the literature on hippocampal place cells and replicate their main results in a computational framework based on the principle of slowness. Each of the chosen studies first allows rodents to develop stable place field activity and then examines a distinct property of the established spatial encoding, namely adaptation to cue relocation and removal; directional firing activity in the linear track and open field; and the results of morphing and stretching the overall environment. To replicate these studies we employ a hierarchical Slow Feature Analysis (SFA) network. SFA is an unsupervised learning algorithm extracting slowly varying information from a given stream of data, and hierarchical application of SFA allows high-dimensional input such as visual images to be processed efficiently and in a biologically plausible fashion. Training data for the network is produced in ratlab, a free basic graphics engine designed to quickly set up a wide range of 3D environments mimicking real-life experimental studies, simulate a foraging rodent while recording its visual input, and train and sample a hierarchical SFA network.
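The slowness principle behind SFA can be demonstrated with a minimal linear SFA on a synthetic two-channel mixture (the study's hierarchical, image-driven network is far larger); the signals below are invented.

```python
import numpy as np

# Two latent sources, one slow and one fast, observed as a linear mixture.
t = np.arange(4000)
slow = np.sin(2 * np.pi * t / 1000.0)
fast = np.sin(2 * np.pi * t / 13.0)
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])

# Linear SFA: whiten the data, then find the direction whose temporal
# derivative has minimal variance (the slowest-varying feature).
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(Xc.T @ Xc / len(Xc))
W = E @ np.diag(d ** -0.5)          # whitening transform
Z = Xc @ W                          # whitened signals (identity covariance)
dZ = np.diff(Z, axis=0)             # finite-difference time derivative
dd, V = np.linalg.eigh(dZ.T @ dZ / len(dZ))
y = Z @ V[:, 0]                     # smallest derivative-variance direction

# The extracted feature recovers the slow source up to sign and scale.
r = np.corrcoef(y, slow)[0, 1]
```

Stacking such units with nonlinear expansions between layers, and feeding them visual input, yields the hierarchical network used in the paper.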

  9. Strong scale dependent bispectrum in the Starobinsky model of inflation

    CERN Document Server

    Arroja, Frederico

    2012-01-01

    We compute analytically the dominant contribution to the tree-level bispectrum in the Starobinsky model of inflation. In this model, the potential is vacuum energy dominated but contains a subdominant linear term which changes the slope abruptly at a point. We show that on large scales compared with the transition scale $k_0$ and in the equilateral limit the analogue of the non-linearity parameter scales as $(k/k_0)^2$, that is its amplitude decays for larger and larger scales until it becomes subdominant with respect to the usual slow-roll suppressed corrections. On small scales we show that the non-linearity parameter oscillates with angular frequency given by $3/k_0$ and its amplitude grows linearly towards smaller scales and can be large depending on the model parameters. We also compare our results with previous results in the literature.

  10. Microarray-based large scale detection of single feature polymorphism in Gossypium hirsutum L.

    Indian Academy of Sciences (India)

    Anukool Srivastava; Samir V. Sawant; Satya Narayan Jena

    2015-12-01

Microarrays offer an opportunity to explore the functional sequence polymorphism among different cultivars of many crop plants. The Affymetrix microarray expression data of five genotypes of Gossypium hirsutum L. at six different fibre developmental stages was used to identify single feature polymorphisms (SFPs). The background-corrected and quantile-normalized log2 intensity values of all probes of triplicate data of each cotton variety were subjected to SFP calling using the SAM procedure in the R language software. We detected a total of 37,473 SFPs among six pairwise genotype combinations of two superior (JKC777 and JKC725) and three inferior (JKC703, JKC737 and JKC783) genotypes using the expression data. The 224 SFPs covering 51 genes were randomly selected from the dataset of all six fibre developmental stages of JKC777 and JKC703 for validation by sequencing on a capillary sequencer. Of these 224 SFPs, 132 were found to be polymorphic and 92 monomorphic, which indicates that the SFP prediction from the expression data in the present study confirmed ∼58.92% as true SFPs. We further identified that most of the SFPs are associated with genes involved in fatty acid, flavonoid and auxin biosynthesis, indicating that these pathways are significantly involved in fibre development.
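The core of a SAM-style SFP call is a moderated t-like statistic per probe; the sketch below uses invented triplicate intensities and a simplified d-statistic, not the full SAM permutation procedure used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_probes = 100

# Simulated background-corrected, quantile-normalized log2 intensities:
# triplicates for two genotypes; probe 0 carries a true polymorphism that
# reduces hybridization in genotype 2 (values are invented).
g1 = rng.normal(8.0, 0.1, size=(n_probes, 3))
g2 = rng.normal(8.0, 0.1, size=(n_probes, 3))
g2[0] += 1.0

# SAM-style relative difference: d = (mean1 - mean2) / (s + s0),
# where s0 is a "fudge factor" stabilizing probes with tiny variance.
s = np.sqrt((g1.var(axis=1, ddof=1) + g2.var(axis=1, ddof=1)) / 3.0)
s0 = np.median(s)
d = (g1.mean(axis=1) - g2.mean(axis=1)) / (s + s0)

top_probe = np.argmax(np.abs(d))    # strongest SFP candidate
```

SAM then compares the ordered d-statistics against permutation-based expected values to control the false discovery rate; this sketch stops at the per-probe statistic.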

  11. Cinlar Subgrid Scale Model for Large Eddy Simulation

    CERN Document Server

    Kara, Rukiye

    2016-01-01

    We construct a new subgrid scale (SGS) stress model for representing the small scale effects in large eddy simulation (LES) of incompressible flows. We use the covariance tensor for representing the Reynolds stress and include Clark's model for the cross stress. The Reynolds stress is obtained analytically from Cinlar random velocity field, which is based on vortex structures observed in the ocean at the subgrid scale. The validity of the model is tested with turbulent channel flow computed in OpenFOAM. It is compared with the most frequently used Smagorinsky and one-equation eddy SGS models through DNS data.

  12. Scale invariant cosmology II: model equations and properties

    CERN Document Server

    Maeder, Andre

    2016-01-01

We want to establish the basic properties of a scale invariant cosmology, that also accounts for the hypothesis of scale invariance of the empty space at large scales. We write the basic analytical properties of the scale invariant cosmological models. The hypothesis of scale invariance of the empty space at large scales brings interesting simplifications in the scale invariant equations for cosmology. There is one new term, depending on the scale factor of the scale invariant cosmology, that opposes gravity and favours an accelerated expansion. We first consider a zero-density model and find an accelerated expansion, going like t^2. In models with matter present, the displacements due to the new term make a significant contribution Omega_l to the energy density of the Universe, satisfying an equation of the form Omega_m + Omega_k + Omega_l = 1. Unlike the Friedmann models, there is a whole family of flat models (k=0) with different density parameters Omega_m smaller than 1. We examine the basic relat...

  13. Data on morphological features of mycosis induced by Colletotrichum nymphaeae and Lecanicillium longisporum on citrus orthezia scale

    Directory of Open Access Journals (Sweden)

    Gabriel Moura Mascarin

    2016-09-01

Full Text Available We describe symptoms of mycosis induced by two native fungal entomopathogens of the citrus orthezia scale, Praelongorthezia praelonga (Hemiptera: Ortheziidae), an important pest of citrus orchards. The data presented in this article are related to the article entitled “Seasonal prevalence of the insect pathogenic fungus Colletotrichum nymphaeae in Brazilian citrus groves under different chemical pesticide regimes” [1]. The endemic fungal pathogen, C. nymphaeae, emerges through the thin cuticular intersegmental regions of the citrus orthezia scale body revealing orange salmon-pigmented conidiophores bearing conidial masses, as well as producing rhizoid-like hyphae that extend over the citrus leaf. By contrast, nymphs or adult females of this scale insect infected with Lecanicillium longisporum exhibit profuse outgrowth of bright white-pigmented conidiophores with clusters of conidia emerging from the insect intersegmental membranes, and mycosed cadavers are commonly observed attached to the leaf surface by hyphal extensions. These morphological differences are important features to discriminate these fungal entomopathogens in citrus orthezia scales.

  14. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other parameters.
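MAD derives change variates from canonical correlation analysis (CCA) of the two acquisition dates. The numpy sketch below, on invented three-band data, shows the core computation; the study applied it to segment-level features, not raw pixels as here.

```python
import numpy as np

def inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    d, E = np.linalg.eigh(S)
    return E @ np.diag(d ** -0.5) @ E.T

rng = np.random.default_rng(0)
n, bands = 500, 3
X = rng.normal(size=(n, bands))               # date-1 spectra (invented)
Y = X + 0.05 * rng.normal(size=(n, bands))    # date-2: mostly unchanged
Y[:10] += 5.0                                 # simulated change, first 10 pixels

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Sxx, Syy = Xc.T @ Xc / (n - 1), Yc.T @ Yc / (n - 1)
Sxy = Xc.T @ Yc / (n - 1)

# CCA via SVD of the whitened cross-covariance; rho are canonical correlations.
U, rho, Vt = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy))
A, B = inv_sqrt(Sxx) @ U, inv_sqrt(Syy) @ Vt.T

# MAD variates: differences of paired canonical variates. Under no change,
# variate i has variance 2*(1 - rho_i), giving a chi-square change score.
M = Xc @ A - Yc @ B
chi2 = ((M ** 2) / (2.0 * (1.0 - rho))).sum(axis=1)
```

Thresholding `chi2` (e.g. against a chi-square quantile) yields the unsupervised change map whose threshold optimization the abstract flags as critical.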

  15. Collisional features in a model of a planetary ring

    NARCIS (Netherlands)

    Lawney, Brian; Jenkins, J.T; Burns, J.A.

    2012-01-01

    Images taken by the Cassini spacecraft display numerous “propellers”, telltale disturbances detected in Saturn’s outer A ring. In conventionally accepted models (Seiß, M., Spahn, F., Sremčević, M., Salo, H. [2005]. Geophys. Res. Lett. 32, L11205; Lewis, M., Stewart, G. [2009]. Icarus 199, 387–412),

  16. Molecular modeling of mechanosensory ion channel structural and functional features.

    Science.gov (United States)

    Gessmann, Renate; Kourtis, Nikos; Petratos, Kyriacos; Tavernarakis, Nektarios

    2010-09-16

    The DEG/ENaC (Degenerin/Epithelial Sodium Channel) protein family comprises related ion channel subunits from all metazoans, including humans. Members of this protein family play roles in several important biological processes such as transduction of mechanical stimuli, sodium re-absorption and blood pressure regulation. Several blocks of amino acid sequence are conserved in DEG/ENaC proteins, but structure/function relations in this channel class are poorly understood. Given the considerable experimental limitations associated with the crystallization of integral membrane proteins, knowledge-based modeling is often the only route towards obtaining reliable structural information. To gain insight into the structural characteristics of DEG/ENaC ion channels, we derived three-dimensional models of MEC-4 and UNC-8, based on the available crystal structures of ASIC1 (Acid Sensing Ion Channel 1). MEC-4 and UNC-8 are two DEG/ENaC family members involved in mechanosensation and proprioception respectively, in the nematode Caenorhabditis elegans. We used these models to examine the structural effects of specific mutations that alter channel function in vivo. The trimeric MEC-4 model provides insight into the mechanism by which gain-of-function mutations cause structural alterations that result in increased channel permeability, which trigger cell degeneration. Our analysis provides an introductory framework to further investigate the multimeric organization of the DEG/ENaC ion channel complex.

  17. Molecular modeling of mechanosensory ion channel structural and functional features.

    Directory of Open Access Journals (Sweden)

    Renate Gessmann

Full Text Available The DEG/ENaC (Degenerin/Epithelial Sodium Channel) protein family comprises related ion channel subunits from all metazoans, including humans. Members of this protein family play roles in several important biological processes such as transduction of mechanical stimuli, sodium re-absorption and blood pressure regulation. Several blocks of amino acid sequence are conserved in DEG/ENaC proteins, but structure/function relations in this channel class are poorly understood. Given the considerable experimental limitations associated with the crystallization of integral membrane proteins, knowledge-based modeling is often the only route towards obtaining reliable structural information. To gain insight into the structural characteristics of DEG/ENaC ion channels, we derived three-dimensional models of MEC-4 and UNC-8, based on the available crystal structures of ASIC1 (Acid Sensing Ion Channel 1). MEC-4 and UNC-8 are two DEG/ENaC family members involved in mechanosensation and proprioception respectively, in the nematode Caenorhabditis elegans. We used these models to examine the structural effects of specific mutations that alter channel function in vivo. The trimeric MEC-4 model provides insight into the mechanism by which gain-of-function mutations cause structural alterations that result in increased channel permeability, which trigger cell degeneration. Our analysis provides an introductory framework to further investigate the multimeric organization of the DEG/ENaC ion channel complex.

  18. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    Energy Technology Data Exchange (ETDEWEB)

    Krafft, S [The University of Texas MD Anderson Cancer Center, Houston, TX (United States); The University of Texas Graduate School of Biomedical Sciences, Houston, TX (United States); Briere, T; Court, L; Martel, M [The University of Texas MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP
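The LASSO step described above, which selects a sparse predictor subset from thousands of candidates, can be sketched with a plain-numpy proximal-gradient (ISTA) solver for L1-penalized logistic regression; the synthetic data and penalty weight are illustrative, not the study's 3888-feature dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 20
X = rng.normal(size=(n, p))            # standardized candidate predictors
w_true = np.zeros(p)
w_true[:3] = [2.0, -2.0, 1.5]          # only 3 features truly predict outcome
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

# ISTA: gradient step on the mean logistic loss, then soft-thresholding
# (the proximal operator of the L1 penalty), which yields exact zeros.
lam = 0.05                                       # assumed penalty weight
step = 4.0 * n / np.linalg.norm(X, 2) ** 2       # 1/L for the logistic loss
w = np.zeros(p)
for _ in range(2000):
    grad = X.T @ (1.0 / (1.0 + np.exp(-X @ w)) - y) / n
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

selected = np.flatnonzero(w)           # LASSO-selected features
```

In the study's setting, the surviving coefficients identify which clinical, dosimetric, and CT texture features enter the final NTCP model; AUC is then computed on the resulting logistic predictions.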

  19. Evaluation of various feature extraction methods for landmine detection using hidden Markov models

    Science.gov (United States)

    Hamdi, Anis; Frigui, Hichem

    2012-06-01

    Hidden Markov Models (HMM) have proved to be effective for detecting buried land mines using data collected by a moving-vehicle-mounted ground penetrating radar (GPR). The general framework for a HMM-based landmine detector consists of building a HMM model for mine signatures and a HMM model for clutter signatures. A test alarm is assigned a confidence proportional to the probability of that alarm being generated by the mine model and inversely proportional to its probability in the clutter model. The HMM models are built based on features extracted from GPR training signatures. These features are expected to capture the salient properties of the 3-dimensional alarms in a compact representation. The baseline HMM framework for landmine detection is based on gradient features. It models the time varying behavior of GPR signals, encoded using edge direction information, to compute the likelihood that a sequence of measurements is consistent with a buried landmine. In particular, the HMM mine model learns the hyperbolic shape associated with the signature of a buried mine by three states that correspond to the succession of an increasing edge, a flat edge, and a decreasing edge. Recently, for the same application, other features have been used with different classifiers. In particular, the Edge Histogram Descriptor (EHD) has been used within a K-nearest neighbor classifier. Another descriptor is based on Gabor features and has been used within a discrete HMM classifier. A third feature, that is closely related to the EHD, is the Bar histogram feature. This feature has been used within a Neural Networks classifier for handwritten word recognition. In this paper, we propose an evaluation of the HMM-based landmine detection framework with several feature extraction techniques. We adapt and evaluate the EHD, Gabor, Bar, and baseline gradient feature extraction methods. We compare the performance of these features using a large and diverse GPR data collection.
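
    The mine-versus-clutter confidence scheme described above can be illustrated with a toy discrete HMM. The sketch below is not the authors' detector: the three mine states (increasing, flat, decreasing edge), the single-state uniform clutter model, and all probabilities are invented for illustration. Confidence is the difference of forward-algorithm log-likelihoods.

```python
# Illustrative sketch (invented parameters): score an edge-direction symbol
# sequence under a 3-state left-to-right "mine" HMM vs. a 1-state "clutter"
# model. Symbols: 0 = rising edge, 1 = flat edge, 2 = falling edge.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Forward algorithm in the log domain.
    pi: initial state probs (S,); A: transitions (S,S); B: emissions (S,K)."""
    logalpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # alpha_j(t) = B[j,o] * sum_i alpha_i(t-1) * A[i,j]
        logalpha = np.log(B[:, o]) + np.logaddexp.reduce(
            logalpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(logalpha)

# Mine model: states increasing -> flat -> decreasing (hyperbola signature)
pi_mine = np.array([1.0, 1e-9, 1e-9]); pi_mine /= pi_mine.sum()
A_mine = np.array([[0.7, 0.3, 1e-9],
                   [1e-9, 0.7, 0.3],
                   [1e-9, 1e-9, 1.0]])
B_mine = np.array([[0.80, 0.15, 0.05],
                   [0.10, 0.80, 0.10],
                   [0.05, 0.15, 0.80]])

# Clutter model: one state emitting all three symbols uniformly
pi_cl = np.array([1.0])
A_cl = np.array([[1.0]])
B_cl = np.full((1, 3), 1.0 / 3.0)

hyperbola = [0, 0, 1, 1, 2, 2]        # rising, flat, falling: mine-like
conf = forward_loglik(hyperbola, pi_mine, A_mine, B_mine) \
     - forward_loglik(hyperbola, pi_cl, A_cl, B_cl)
print(f"mine-vs-clutter confidence = {conf:.2f}")
```

    A mine-like sequence yields a positive confidence because the left-to-right state structure matches the rising/flat/falling succession, exactly the intuition given in the abstract.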

  20. A model of the medial superior olive explains spatiotemporal features of local field potentials.

    Science.gov (United States)

    Goldwyn, Joshua H; Mc Laughlin, Myles; Verschooten, Eric; Joris, Philip X; Rinzel, John

    2014-08-27

    Local field potentials are important indicators of in vivo neural activity. Sustained, phase-locked, sound-evoked extracellular fields in the mammalian auditory brainstem, known as the auditory neurophonic, reflect the activity of neurons in the medial superior olive (MSO). We develop a biophysically based model of the neurophonic that accounts for features of in vivo extracellular recordings in the cat auditory brainstem. By making plausible idealizations regarding the spatial symmetry of MSO neurons and the temporal synchrony of their afferent inputs, we reduce the challenging problem of computing extracellular potentials in a 3D volume conductor to a one-dimensional problem. We find that postsynaptic currents in bipolar MSO neuron models generate extracellular voltage responses that strikingly resemble in vivo recordings. Simulations reproduce distinctive spatiotemporal features of the in vivo neurophonic response to monaural pure tones: large oscillations (hundreds of microvolts to millivolts), broad spatial reach (millimeter scale), and a dipole-like spatial profile. We also explain how somatic inhibition and the relative timing of bilateral excitation may shape the spatial profile of the neurophonic. We observe in simulations, and find supporting evidence in in vivo data, that coincident excitatory inputs on both dendrites lead to a drastically reduced spatial reach of the neurophonic. This outcome is surprising because coincident inputs are thought to evoke maximal firing rates in MSO neurons, and it reconciles previously puzzling evoked potential results in humans and animals. The success of our model, which has no axon or spike-generating sodium currents, suggests that MSO spikes do not contribute appreciably to the neurophonic.

  1. Analysis of linear trade models and relation to scale economies.

    Science.gov (United States)

    Gomory, R E; Baumol, W J

    1997-09-01

    We discuss linear Ricardo models with a range of parameters. We show that the exact boundary of the region of equilibria of these models is obtained by solving a simple integer programming problem. We show that there is also an exact correspondence between many of the equilibria resulting from families of linear models and the multiple equilibria of economies of scale models.

  2. Plant growth simulation for landscape scale hydrologic modeling

    Science.gov (United States)

    Landscape scale hydrologic models can be improved by incorporating realistic, process-oriented plant models for simulating crops, grasses, and woody species. The objective of this project was to present some approaches for plant modeling applicable to hydrologic models like SWAT that can affect the...

  3. Local phase tensor features for 3-D ultrasound to statistical shape+pose spine model registration.

    Science.gov (United States)

    Hacihaliloglu, Ilker; Rasoulian, Abtin; Rohling, Robert N; Abolmaesumi, Purang

    2014-11-01

    Most conventional spine interventions are performed under X-ray fluoroscopy guidance. In recent years, there has been a growing interest to develop nonionizing imaging alternatives to guide these procedures. Ultrasound guidance has emerged as a leading alternative. However, a challenging problem is automatic identification of the spinal anatomy in ultrasound data. In this paper, we propose a local phase-based bone feature enhancement technique that can robustly identify the spine surface in ultrasound images. The local phase information is obtained using a gradient energy tensor filter. This information is used to construct local phase tensors in ultrasound images, which highlight the spine surface. We show that our proposed approach results in a more distinct enhancement of the bone surfaces compared to recently proposed techniques based on monogenic scale-space filters and logarithmic Gabor filters. We also demonstrate that registration accuracy of a statistical shape+pose model of the spine to 3-D ultrasound images can be significantly improved, using the proposed method, compared to those obtained using monogenic scale-space filters and logarithmic Gabor filters.

  4. Stochastic modeling of unresolved scales in complex systems

    Institute of Scientific and Technical Information of China (English)

    Jinqiao DUAN

    2009-01-01

    Model uncertainties or simulation uncertainties occur in mathematical modeling of multiscale complex systems, since some mechanisms or scales are not represented (i.e., 'unresolved') due to a lack in our understanding of these mechanisms or limitations in computational power. The impact of these unresolved scales on the resolved scales needs to be parameterized or taken into account. A stochastic scheme is devised to take the effects of unresolved scales into account, in the context of solving nonlinear partial differential equations. An example is presented to demonstrate this strategy.
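
    A minimal sketch of this strategy is to represent the unresolved scales as a stochastic forcing term in a discretized PDE. Below, an Euler-Maruyama step adds white noise to an explicit finite-difference solver for the 1-D heat equation with periodic boundaries; the noise amplitude `sigma` is a made-up stand-in for a fitted parameterization, not a value from the paper.

```python
# Hedged sketch: 1-D heat equation u_t = nu * u_xx plus stochastic forcing
# standing in for unresolved-scale effects (Euler-Maruyama time stepping).
# All parameter values are assumptions chosen for numerical stability.
import numpy as np

rng = np.random.default_rng(1)
nx, nt = 64, 500
dx, dt = 1.0 / 64, 1e-4              # dt * nu / dx**2 ≈ 0.41 < 0.5 (stable)
nu, sigma = 1.0, 0.05                # diffusivity; noise strength (assumed)

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.sin(2 * np.pi * x)            # smooth (resolved-scale) initial condition

for _ in range(nt):
    # Second-order periodic Laplacian
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    # Deterministic resolved dynamics + stochastic unresolved-scale forcing
    u = u + dt * nu * lap + sigma * np.sqrt(dt) * rng.standard_normal(nx)

print(f"final field: mean={u.mean():.4f}, max|u|={np.abs(u).max():.4f}")
```

    The sqrt(dt) scaling of the noise increment is the defining feature of the Euler-Maruyama scheme; without it the forcing would not converge to a well-defined stochastic process as dt shrinks.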

  5. Crystalline structure of accretion disks: features of a global model.

    Science.gov (United States)

    Montani, Giovanni; Benini, Riccardo

    2011-08-01

    In this paper, we develop the analysis of a two-dimensional magnetohydrodynamical configuration for an axially symmetric and rotating plasma (embedded in a dipolelike magnetic field), modeling the structure of a thin accretion disk around a compact astrophysical object. Our study investigates the global profile of the disk plasma, in order to fix the conditions for the existence of a crystalline morphology and ring sequence, as outlined by the local analysis pursued in Coppi [Phys. Plasmas 12, 7302 (2005)] and Coppi and Rousseau [Astrophys. J. 641, 458 (2006)]. In the linear regime, when the electromagnetic back-reaction of the plasma is small enough, we show the existence of an oscillating radial behavior for the flux surface function, which very closely resembles the one outlined in the local model, apart from a radial modulation of the amplitude. In the opposite limit, corresponding to a dominant back-reaction in the magnetic structure over the field of the central object, we can recognize the existence of a ringlike decomposition of the disk, according to the same modulation of the magnetic flux surface, and a smoother radial decay of the disk density, with respect to the linear case. In this extreme nonlinear regime, the global model seems to predict a configuration very close to that of the local analysis, but here the thermostatic pressure, crucial for the equilibrium setting, is also radially modulated. Among the conditions required for the validity of such a global model is the confinement of the radial coordinate within a given value, sensitive to the disk temperature and to the mass of the central object; this condition corresponds to dealing with a thin disk configuration.

  6. Using Feature Modelling and Automations to Select among Cloud Solutions

    OpenAIRE

    Quinton, Clément; Duchien, Laurence; Heymans, Patrick; Mouton, Stéphane; Charlier, Etienne

    2012-01-01

    Cloud computing is a major trend in distributed computing environments. Resources are accessed on demand by customers and are delivered as services by cloud providers in a pay-per-use model. Companies provide their applications as services and rely on cloud providers to provision, host and manage such applications on top of their infrastructure. However, the wide range of cloud solutions and the lack of knowledge in this domain is a real problem for companies when faci...

  7. Feature learning for a hidden Markov model approach to landmine detection

    Science.gov (United States)

    Zhang, Xuping; Gader, Paul; Frigui, Hichem

    2007-04-01

    Hidden Markov Models (HMMs) are useful tools for landmine detection and discrimination using Ground Penetrating Radar (GPR). The performance of HMMs, as well as other feature-based methods, depends not only on the design of the classifier but also on the features. Traditionally, algorithms for learning the parameters of classifiers have been intensely investigated, while algorithms for learning parameters of the feature extraction process have been much less intensely investigated. In this paper, we describe experiments for learning feature extraction and classification parameters simultaneously in the context of using hidden Markov models for landmine detection.

  8. Analysis of chromosome aberration data by hybrid-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Indrawati, Iwiq [Research and Development on Radiation and Nuclear Biomedical Center, National Nuclear Energy Agency (Indonesia); Kumazawa, Shigeru [Nuclear Technology and Education Center, Japan Atomic Energy Research Institute, Honkomagome, Tokyo (Japan)

    2000-02-01

    This paper presents a new methodology for analyzing data of chromosome aberrations, which is useful to understand the characteristics of dose-response relationships and to construct the calibration curves for the biological dosimetry. The hybrid scale of linear and logarithmic scales gives rise to a particular plotting paper, in which the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, and these are conveniently called hybrid scale models. One can systematically select the best-fit model among the nine models by examining the conditions for a straight line of data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of the increased number of model parameters. We showed that the hybrid-hybrid model (both variables of dose and response using the hybrid scale) provides the best-fit straight lines to be used as the reliable and readable calibration curves of chromosome aberrations. (author)

  9. Modeling of the ground-to-SSFMB link networking features using SPW

    Science.gov (United States)

    Watson, John C.

    1993-01-01

    This report describes the modeling and simulation of the networking features of the ground-to-Space Station Freedom manned base (SSFMB) link using COMDISCO signal processing work-system (SPW). The networking features modeled include the implementation of Consultative Committee for Space Data Systems (CCSDS) protocols in the multiplexing of digitized audio and core data into virtual channel data units (VCDU's) in the control center complex and the demultiplexing of VCDU's in the onboard baseband signal processor. The emphasis of this work has been placed on techniques for modeling the CCSDS networking features using SPW. The objectives for developing the SPW models are to test the suitability of SPW for modeling networking features and to develop SPW simulation models of the control center complex and space station baseband signal processor for use in end-to-end testing of the ground-to-SSFMB S-band single access forward (SSAF) link.

  10. Crystalline Structure of Accretion Disks: Features of the Global Model

    CERN Document Server

    Montani, Giovanni

    2012-01-01

    In this paper, we develop the analysis of a two-dimensional magnetohydrodynamical configuration for an axially symmetric and rotating plasma (embedded in a dipole-like magnetic field), modeling the structure of a thin accretion disk around a compact astrophysical object. Our study investigates the global profile of the disk plasma, in order to fix the conditions for the existence of a crystalline morphology and ring sequence, as outlined by the local analysis pursued in [1, 2]. In the linear regime, when the electromagnetic back-reaction of the plasma is small enough, we show the existence of an oscillating radial behavior for the flux surface function which very closely resembles the one outlined in the local model, apart from a radial modulation of the amplitude. In the opposite limit, corresponding to a dominant back-reaction in the magnetic structure over the field of the central object, we can recognize the existence of a ring-like decomposition of the disk, according to the same modulation of the magnetic f...

  11. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.

    Science.gov (United States)

    Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S

    2017-10-01

    The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
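
    The ensemble idea described here, training each member on a different feature subset and combining predictions by majority vote, can be sketched with toy classifiers. The sketch below is not the authors' pipeline (which uses an optimum-path forest classifier on dermoscopic features): the nearest-centroid members, the synthetic "shape/colour/texture" groups, and all data are invented to keep the example self-contained.

```python
# Hedged illustration: feature-subset ensemble with 2-of-3 majority voting.
# Each member classifier sees only one (synthetic) feature group.
import numpy as np

rng = np.random.default_rng(2)

def nearest_centroid_fit(X, y):
    # One centroid per class
    return {int(c): X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[np.argmin(dists, axis=0)]

# Synthetic "lesions": 3 feature groups of 4 features each, 2 classes
n = 300
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 12)) + y[:, None] * 0.8   # class-dependent shift
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]  # shape / colour / texture stand-ins

members = [nearest_centroid_fit(X[:, g], y) for g in groups]
votes = np.stack([nearest_centroid_predict(m, X[:, g])
                  for m, g in zip(members, groups)])
majority = (votes.sum(axis=0) >= 2).astype(int)    # 2-of-3 majority vote

acc = (majority == y).mean()
print(f"ensemble training accuracy = {acc:.3f}")
```

    Training each member on a distinct feature subset is one way to inject the diversity that voting ensembles need: members that err on different cases are outvoted case by case.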

  12. Phenomenology of dark energy: general features of large-scale perturbations

    CERN Document Server

    Perenon, Louis; Marinoni, Christian; Hui, Lam

    2015-01-01

    We present a systematic exploration of dark energy and modified gravity models containing a single scalar field non-minimally coupled to the metric. Even though the parameter space is large, by exploiting an effective field theory (EFT) formulation and by imposing simple physical constraints such as stability conditions and (sub-)luminal propagation of perturbations, we arrive at a number of generic predictions. (1) The linear growth rate of matter density fluctuations is generally suppressed compared to $\Lambda$CDM at intermediate redshifts ($0.5 \lesssim z \lesssim 1$), despite the introduction of an attractive long-range scalar force. This is due to the fact that, in self-accelerating models, the background gravitational coupling weakens at intermediate redshifts, over-compensating the effect of the attractive scalar force. (2) At higher redshifts, the opposite happens; we identify a period of super-growth when the linear growth rate is larger than that predicted by $\Lambda$CDM. (3) The gravitational sli...

  13. Observable Emission Features of Black Hole GRMHD Jets on Event Horizon Scales

    Science.gov (United States)

    Pu, Hung-Yi; Wu, Kinwah; Younsi, Ziri; Asada, Keiichi; Mizuno, Yosuke; Nakamura, Masanori

    2017-08-01

    The general-relativistic magnetohydrodynamical (GRMHD) formulation for black hole-powered jets naturally gives rise to a stagnation surface, where inflows and outflows along magnetic field lines that thread the black hole event horizon originate. We derive a conservative formulation for the transport of energetic electrons, which are initially injected at the stagnation surface and subsequently transported along flow streamlines. With this formulation the energy spectra evolution of the electrons along the flow in the presence of radiative and adiabatic cooling is determined. For flows regulated by synchrotron radiative losses and adiabatic cooling, the effective radio emission region is found to be finite, and geometrically it is more extended along the jet central axis. Moreover, the emission from regions adjacent to the stagnation surface is expected to be the most luminous as this is where the freshly injected energetic electrons are concentrated. An observable stagnation surface is thus a strong prediction of the GRMHD jet model with the prescribed non-thermal electron injection. Future millimeter/submillimeter (mm/sub-mm) very-long-baseline interferometric observations of supermassive black hole candidates, such as the one at the center of M87, can verify this GRMHD jet model and its associated non-thermal electron injection mechanism.

  14. Advances in large-scale crop modeling

    Science.gov (United States)

    Scholze, Marko; Bondeau, Alberte; Ewert, Frank; Kucharik, Chris; Priess, Jörg; Smith, Pascalle

    Intensified human activity and a growing population have changed the climate and the land biosphere. One of the most widely recognized human perturbations is the emission of carbon dioxide (CO2) by fossil fuel burning and land-use change. As the terrestrial biosphere is an active player in the global carbon cycle, changes in land use feed back to the climate of the Earth through regulation of the content of atmospheric CO2, the most important greenhouse gas, and changing albedo (e.g., energy partitioning). Recently, the climate modeling community has started to develop more complex Earth system models that include marine and terrestrial biogeochemical processes in addition to the representation of atmospheric and oceanic circulation. However, most terrestrial biosphere models simulate only natural, or so-called potential, vegetation and do not account for managed ecosystems such as croplands and pastures, which make up nearly one-third of the Earth's land surface.

  15. [Unfolding item response model using best-worst scaling].

    Science.gov (United States)

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated by the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimulus among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimulus estimates generated with the proposed models and those of Usami (2011).
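
    The core idea, response probability driven by person-stimulus distance, can be illustrated with a toy best-worst choice rule. The paper's exact likelihood is not reproduced here: the squared-distance utility and the maxdiff-style rule P(best=i, worst=j) ∝ exp(u_i − u_j) below are a common textbook formulation, used only as an assumed stand-in.

```python
# Hypothetical sketch of an unfolding best-worst choice model: utility falls
# with squared person-stimulus distance; the (best, worst) pair probability
# follows a maxdiff-style exponential rule. Not the paper's estimator.
import numpy as np

def best_worst_probs(person, stimuli):
    u = -(person - np.asarray(stimuli)) ** 2      # unfolding utilities
    k = len(stimuli)
    p = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i != j:
                p[i, j] = np.exp(u[i] - u[j])     # best = i, worst = j
    return p / p.sum()

# Person at 0.2 judging four stimuli on the same latent scale
probs = best_worst_probs(person=0.2, stimuli=[-1.0, 0.0, 0.5, 2.0])
best, worst = np.unravel_index(np.argmax(probs), probs.shape)
print(f"most likely (best, worst) pair = ({best}, {worst})")
# The nearest stimulus (0.0, index 1) is the likeliest "best";
# the farthest (2.0, index 3) is the likeliest "worst".
```

    Estimation would then maximize the product of such pair probabilities over observed choices; the point of best-worst scaling is that each trial contributes information about both ends of the preference ordering.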

  16. Decay of isolated surface features driven by the Gibbs-Thomson effect in an analytic model and a simulation

    Energy Technology Data Exchange (ETDEWEB)

    McLean, J.G.; Krishnamachari, B.; Peale, D.R. [Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, New York 14853-2501 (United States); Chason, E. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States); Sethna, J.P.; Cooper, B.H. [Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, New York 14853-2501 (United States)

    1997-01-01

    A theory based on the thermodynamic Gibbs-Thomson relation is presented that provides the framework for understanding the time evolution of isolated nanoscale features (i.e., islands and pits) on surfaces. Two limiting cases are predicted, in which either diffusion or interface transfer is the limiting process. These cases correspond to similar regimes considered in previous works addressing the Ostwald ripening of ensembles of features. A third possible limiting case is noted for the special geometry of ``stacked'' islands. In these limiting cases, isolated features are predicted to decay in size with a power-law scaling in time: A ∝ (t0-t)^n, where A is the area of the feature, t0 is the time at which the feature disappears, and n=2/3 or 1. The constant of proportionality is related to parameters describing both the kinetic and equilibrium properties of the surface. A continuous-time Monte Carlo simulation is used to test the application of this theory to generic surfaces with atomic scale features. A method is described to obtain macroscopic kinetic parameters describing interfaces in such simulations. Simulation and analytic theory are compared directly, using measurements of the simulation to determine the constants of the analytic theory. Agreement between the two is very good over a range of surface parameters, suggesting that the analytic theory properly captures the necessary physics. It is anticipated that the simulation will be useful in modeling complex surface geometries often seen in experiments on physical surfaces, for which application of the analytic model is not straightforward. © 1997 The American Physical Society

  17. Decay of isolated surface features driven by the Gibbs-Thomson effect in an analytic model and a simulation

    Science.gov (United States)

    McLean, James G.; Krishnamachari, B.; Peale, D. R.; Chason, E.; Sethna, James P.; Cooper, B. H.

    1997-01-01

    A theory based on the thermodynamic Gibbs-Thomson relation is presented that provides the framework for understanding the time evolution of isolated nanoscale features (i.e., islands and pits) on surfaces. Two limiting cases are predicted, in which either diffusion or interface transfer is the limiting process. These cases correspond to similar regimes considered in previous works addressing the Ostwald ripening of ensembles of features. A third possible limiting case is noted for the special geometry of ``stacked'' islands. In these limiting cases, isolated features are predicted to decay in size with a power-law scaling in time: A ∝ (t0-t)^n, where A is the area of the feature, t0 is the time at which the feature disappears, and n=2/3 or 1. The constant of proportionality is related to parameters describing both the kinetic and equilibrium properties of the surface. A continuous-time Monte Carlo simulation is used to test the application of this theory to generic surfaces with atomic scale features. A method is described to obtain macroscopic kinetic parameters describing interfaces in such simulations. Simulation and analytic theory are compared directly, using measurements of the simulation to determine the constants of the analytic theory. Agreement between the two is very good over a range of surface parameters, suggesting that the analytic theory properly captures the necessary physics. It is anticipated that the simulation will be useful in modeling complex surface geometries often seen in experiments on physical surfaces, for which application of the analytic model is not straightforward.
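
    The power-law form A ∝ (t0 − t)^n predicted above is straightforward to check numerically: on a log-log plot of A against (t0 − t), the exponent n is the slope. The sketch below generates a synthetic noiseless decay trace with n = 2/3 (the diffusion-limited value) and recovers the exponent by least squares; the trace is fabricated, not simulation or experimental data.

```python
# Sketch: recover the decay exponent n from a synthetic trace A = C*(t0-t)^n
# via a log-log linear fit. t0, C are arbitrary made-up constants.
import numpy as np

t0, n_true, C = 10.0, 2.0 / 3.0, 5.0
t = np.linspace(0.0, 9.5, 200)          # stop before the feature disappears
area = C * (t0 - t) ** n_true

# log A = log C + n * log(t0 - t)  ->  slope of the log-log fit is n
slope, intercept = np.polyfit(np.log(t0 - t), np.log(area), 1)
print(f"recovered exponent n = {slope:.4f}")
```

    With real (noisy) decay data one would also fit t0 itself, since the apparent slope is sensitive to the assumed disappearance time near the end of the decay.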

  18. Products recognition on shop-racks from local scale-invariant features

    Science.gov (United States)

    Zawistowski, Jacek; Kurzejamski, Grzegorz; Garbat, Piotr; Naruniec, Jacek

    2016-04-01

    This paper presents a system designed for multi-object detection and adapted to the application of product search on market shelves. The system uses well-known binary keypoint detection algorithms to find characteristic points in the image. One of the main ideas is object recognition based on the Implicit Shape Model method. The authors propose several improvements to the algorithm. Originally, fiducial points are matched with a very simple function, which limits the number of object parts that can be successfully separated; various methods of classification may be validated in order to achieve higher performance. Such an extension implies research on a training procedure able to deal with many object categories. The proposed solution opens new possibilities for many algorithms demanding fast and robust multi-object recognition.

  19. Large and strong scale dependent bispectrum in single field inflation from a sharp feature in the mass

    CERN Document Server

    Arroja, Frederico; Sasaki, Misao

    2011-01-01

    We study an inflationary model driven by a single minimally coupled standard kinetic term scalar field with a step in its mass modeled by a Heaviside step function. We present an analytical approximation for the mode function of the curvature perturbation, obtain the power spectrum analytically and compare it with the numerical result. We show that, after the scale set by the step, the spectrum contains damped oscillations that are well described by our analytical approximation. We also compute the dominant contribution to the bispectrum in the equilateral and the squeezed limits and find new shapes. In the equilateral and squeezed limits the bispectrum oscillates and it has a linear growth envelope towards smaller scales. The bispectrum size can be large depending on the model parameters.

  20. Flavor Gauge Models Below the Fermi Scale

    Energy Technology Data Exchange (ETDEWEB)

    Babu, K. S. [Oklahoma State U.; Friedland, A. [SLAC; Machado, P. A.N. [Madrid, IFT; Mocioiu, I. [Penn State U.

    2017-05-04

    The mass and weak interaction eigenstates for the quarks of the third generation are very well aligned, an empirical fact for which the Standard Model offers no explanation. We explore the possibility that this alignment is due to an additional gauge symmetry in the third generation. Specifically, we construct and analyze an explicit, renormalizable model with a gauge boson, $X$, corresponding to the $B-L$ symmetry of the third family. Having a relatively light (in the MeV to multi-GeV range), flavor-nonuniversal gauge boson results in a variety of constraints from different sources. By systematically analyzing 20 different constraints, we identify the most sensitive probes: kaon, $D^+$ and Upsilon decays, $D-\bar{D}^0$ mixing, atomic parity violation, and neutrino scattering and oscillations. For the new gauge coupling $g_X$ in the range $(10^{-2} - 10^{-4})$ the model is shown to be consistent with the data. Possible ways of testing the model in $b$ physics, top and $Z$ decays, direct collider production and neutrino oscillation experiments, where one can observe nonstandard matter effects, are outlined. The choice of leptons to carry the new force is ambiguous, resulting in additional phenomenological implications, such as non-universality in semileptonic bottom decays. The proposed framework provides interesting connections between neutrino oscillations, flavor and collider physics.

  1. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin

    2015-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\tau$ as $\tau^{-\alpha}$. Depending on the exponent $\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\alpha=1$) tree depth grows as $(\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
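
    The model family described in this record is easy to simulate: each new leaf attaches to an existing node chosen with probability proportional to the node's age raised to −α. The sketch below is a hypothetical implementation of that rule, not the authors' code, and simply compares depth statistics at α = 0 (uniform attachment, logarithmic depth) and α = 1 (the anomalous regime).

```python
# Hypothetical simulation of age-dependent branching: a new node attaches to
# node i with probability proportional to age_i ** (-alpha).
import numpy as np

def grow_tree(n, alpha, rng):
    depth = [0]                       # depth of each node; node 0 is the root
    birth = [0]                       # step at which each node appeared
    for step in range(1, n):
        ages = step - np.array(birth) + 1.0
        w = ages ** (-alpha)          # older branches attract fewer children
        parent = rng.choice(step, p=w / w.sum())
        depth.append(depth[parent] + 1)
        birth.append(step)
    return np.array(depth)

rng = np.random.default_rng(3)
d0 = grow_tree(2000, 0.0, rng)        # uniform attachment (random tree)
d1 = grow_tree(2000, 1.0, rng)        # transition point, anomalous scaling
print(f"alpha=0: mean depth = {d0.mean():.2f}")
print(f"alpha=1: mean depth = {d1.mean():.2f}")
```

    At α = 1 new nodes preferentially attach near the growing frontier, so trees are markedly deeper than uniform random trees of the same size, consistent with the super-logarithmic depth growth quoted in the abstract.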

  2. Scaling behavior of the Heisenberg model in three dimensions.

    Science.gov (United States)

    Gordillo-Guerrero, A; Kenna, R; Ruiz-Lorenzo, J J

    2013-12-01

    We report on extensive numerical simulations of the three-dimensional Heisenberg model and its analysis through finite-size scaling of Lee-Yang zeros. Besides the critical regime, we also investigate scaling in the ferromagnetic phase. We show that, in this case of broken symmetry, the corrections to scaling contain information on the Goldstone modes. We present a comprehensive Lee-Yang analysis, including the density of zeros, and confirm recent numerical estimates for critical exponents.

  3. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguíluz, Víctor M.; Hernández-García, Emilio; Klemm, Konstantin

    2010-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\tau$ as $\tau^{-\alpha}$. Depending on the exponent $\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\alpha=1$) tree depth grows as $(\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...

  4. Multiscale modeling of soft matter: scaling of dynamics.

    Science.gov (United States)

    Fritz, Dominik; Koschke, Konstantin; Harmandaris, Vagelis A; van der Vegt, Nico F A; Kremer, Kurt

    2011-06-14

    Many physical phenomena and properties of soft matter systems are characterized by an interplay of interactions and processes that span a wide range of length- and time scales. Computer simulation approaches require models, which cover these scales. These are typically multiscale models that combine and link different levels of resolution. In order to reach mesoscopic time- and length scales, necessary to access material properties, coarse-grained models are developed. They are based on microscopic, atomistic descriptions of systems and represent these systems on a coarser, mesoscopic level. While the connection between length scales can be established immediately, the link between the different time scales that takes into account the faster dynamics of the coarser system cannot be obtained directly. In this perspective paper we discuss methods that link the time scales in structure based multiscale models. Concepts which try to rigorously map dynamics of related models are limited to simple model systems, while the challenge in soft matter systems is the multitude of fluctuating energy barriers of comparable height. More pragmatic methods to match time scales are applied successfully to quantitatively understand and predict dynamics of one-component soft matter systems. However, there are still open questions. We point out that the link between the dynamics on different resolution levels can be affected by slight changes of the system, as for different tacticities. Furthermore, in two-component systems the dynamics of the host polymer and of additives are accelerated very differently.

  5. Toward a model for lexical access based on acoustic landmarks and distinctive features

    Science.gov (United States)

    Stevens, Kenneth N.

    2002-04-01

    This article describes a model in which the acoustic speech signal is processed to yield a discrete representation of the speech stream in terms of a sequence of segments, each of which is described by a set (or bundle) of binary distinctive features. These distinctive features specify the phonemic contrasts that are used in the language, such that a change in the value of a feature can potentially generate a new word. This model is a part of a more general model that derives a word sequence from this feature representation, the words being represented in a lexicon by sequences of feature bundles. The processing of the signal proceeds in three steps: (1) Detection of peaks, valleys, and discontinuities in particular frequency ranges of the signal leads to identification of acoustic landmarks. The type of landmark provides evidence for a subset of distinctive features called articulator-free features (e.g., [vowel], [consonant], [continuant]). (2) Acoustic parameters are derived from the signal near the landmarks to provide evidence for the actions of particular articulators, and acoustic cues are extracted by sampling selected attributes of these parameters in these regions. The selection of cues that are extracted depends on the type of landmark and on the environment in which it occurs. (3) The cues obtained in step (2) are combined, taking context into account, to provide estimates of ``articulator-bound'' features associated with each landmark (e.g., [lips], [high], [nasal]). These articulator-bound features, combined with the articulator-free features in (1), constitute the sequence of feature bundles that forms the output of the model. Examples of cues that are used, and justification for this selection, are given, as well as examples of the process of inferring the underlying features for a segment when there is variability in the signal due to enhancement gestures (recruited by a speaker to make a contrast more salient) or due to overlap of gestures from

  6. Predicting breeding bird occurrence by stand- and microhabitat-scale features in even-aged stands in the Central Appalachians

    Science.gov (United States)

    McDermott, M.E.; Wood, P.B.; Miller, G.W.; Simpson, B.T.

    2011-01-01

    Spatial scale is an important consideration when managing forest wildlife habitat, and models can be used to improve our understanding of these habitats at relevant scales. Our objectives were to determine whether stand- or microhabitat-scale variables better predicted bird metrics (diversity, species presence, and abundance) and to examine breeding bird response to clearcut size and age in a highly forested landscape. In 2004-2007, vegetation data were collected from 62 even-aged stands that were 3.6-34.6 ha in size and harvested in 1963-1990 on the Monongahela National Forest, WV, USA. In 2005-2007, we also surveyed birds at vegetation plots. We used classification and regression trees to model breeding bird habitat use with a suite of stand and microhabitat variables. Among stand variables, elevation, stand age, and stand size were most commonly retained as important variables in guild and species models. Among microhabitat variables, medium-sized tree density and tree species diversity most commonly predicted bird presence or abundance. Early successional and generalist bird presence, abundance, and diversity were better predicted by microhabitat variables than stand variables. Thus, more intensive field sampling may be required to predict habitat use for these species, and management may be needed at a finer scale. Conversely, stand-level variables had greater utility in predicting late-successional species occurrence and abundance; thus management decisions and modeling at this scale may be suitable in areas with a uniform landscape, such as our study area. Our study suggests that late-successional breeding bird diversity can be maximized long-term by including harvests >10 ha in size into our study area and by increasing tree diversity. Some harvesting will need to be incorporated regularly, because after 15 years, the study stands did not provide habitat for most early successional breeding specialists. © 2010 Elsevier B.V.

  7. DISPLAY-2: a two-dimensional shallow layer model for dense gas dispersion including complex features.

    Science.gov (United States)

    Venetsanos, A G; Bartzis, J G; Würtz, J; Papailiou, D D

    2003-04-25

    A two-dimensional shallow layer model has been developed to predict dense gas dispersion, under realistic conditions, including complex features such as two-phase releases, obstacles and inclined ground. The model attempts to predict the time and space evolution of the cloud formed after a release of a two-phase pollutant into the atmosphere. The air-pollutant mixture is assumed ideal. The cloud evolution is described mathematically through the Cartesian, two-dimensional, shallow layer conservation equations for mixture mass, mixture momentum in two horizontal directions, total pollutant mass fraction (vapor and liquid) and mixture internal energy. Liquid mass fraction is obtained assuming phase equilibrium. Account is taken in the conservation equations for liquid slip and eventual liquid rainout through the ground. Entrainment of ambient air is modeled via an entrainment velocity model, which takes into account the effects of ground friction, ground heat transfer and relative motion between cloud and surrounding atmosphere. The model additionally accounts for thin obstacles effects in three ways. First, a stepwise description of the obstacle is generated, following the grid cell faces, taking into account the corresponding area blockage. Then, obstacle drag on the passing cloud is modeled by adding flow resistance terms in the momentum equations. Finally, the effect of extra vorticity generation and entrainment enhancement behind obstacles is modeled by adding locally into the entrainment formula without obstacles, a characteristic velocity scale defined from the obstacle pressure drop and the local cloud height. The present model predictions have been compared against theoretical results for constant volume and constant flux gravity currents. It was found that deviations of the predicted cloud footprint area change with time from the theoretical were acceptably small, if one models the frictional forces between cloud and ambient air, neglecting the Richardson
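
    Schematically, a generic shallow-layer system of this kind can be written as follows (a textbook-style sketch, not the paper's exact equations; the source and resistance terms are placeholders):

```latex
\begin{aligned}
\partial_t(\rho h) + \nabla\cdot(\rho h\,\mathbf{u})
   &= \rho_a u_e + S_m
   &&\text{(mixture mass)}\\
\partial_t(\rho h\,\mathbf{u}) + \nabla\cdot(\rho h\,\mathbf{u}\otimes\mathbf{u})
   &= -\tfrac{1}{2}\,g'\,\nabla(\rho h^2) - g'\rho h\,\nabla z_g - \mathbf{F}_r
   &&\text{(momentum)}\\
\partial_t(\rho h c) + \nabla\cdot(\rho h c\,\mathbf{u})
   &= S_c
   &&\text{(pollutant mass fraction)}
\end{aligned}
```

    Here h is the layer height, u the depth-averaged velocity, ρ and ρ_a the mixture and ambient densities, g' = g(ρ − ρ_a)/ρ the reduced gravity, u_e the entrainment velocity, z_g the ground elevation, c the total pollutant mass fraction, and S_m, S_c, F_r generic source and resistance terms (obstacle drag, ground friction, rainout).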

  8. Gravitational wave background from Standard Model physics: qualitative features

    Energy Technology Data Exchange (ETDEWEB)

    Ghiglieri, J.; Laine, M. [Institute for Theoretical Physics, Albert Einstein Center, University of Bern,Sidlerstrasse 5, CH-3012 Bern (Switzerland)

    2015-07-16

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T>160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  9. Gravitational wave background from Standard Model physics: Qualitative features

    CERN Document Server

    Ghiglieri, J

    2015-01-01

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future ge...

  10. Detection of baryon acoustic oscillation features in the large-scale three-point correlation function of SDSS BOSS DR12 CMASS galaxies

    Science.gov (United States)

    Slepian, Zachary; Eisenstein, Daniel J.; Brownstein, Joel R.; Chuang, Chia-Hsun; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; Percival, Will J.; Ross, Ashley J.; Rossi, Graziano; Seo, Hee-Jong; Slosar, Anže; Vargas-Magaña, Mariana

    2017-08-01

    We present the large-scale three-point correlation function (3PCF) of the Sloan Digital Sky Survey DR12 Constant stellar Mass (CMASS) sample of 777 202 Luminous Red Galaxies, the largest-ever sample used for a 3PCF or bispectrum measurement. We make the first high-significance (4.5σ) detection of baryon acoustic oscillations (BAO) in the 3PCF. Using these acoustic features in the 3PCF as a standard ruler, we measure the distance to z = 0.57 to 1.7 per cent precision (statistical plus systematic). We find DV = 2024 ± 29 Mpc (stat) ± 20 Mpc (sys) for our fiducial cosmology (consistent with Planck 2015) and bias model. This measurement extends the use of the BAO technique from the two-point correlation function (2PCF) and power spectrum to the 3PCF and opens an avenue for deriving additional cosmological distance information from future large-scale structure redshift surveys such as DESI. Our measured distance scale from the 3PCF is fairly independent from that derived from the pre-reconstruction 2PCF and is equivalent to increasing the length of BOSS by roughly 10 per cent; reconstruction appears to lower the independence of the distance measurements. Fitting a model including tidal tensor bias yields a moderate-significance (2.6σ) detection of this bias with a value in agreement with the prediction from local Lagrangian biasing.

  11. Impact of SLA assimilation in the Sicily Channel Regional Model: model skills and mesoscale features

    Directory of Open Access Journals (Sweden)

    A. Olita

    2012-07-01

    Full Text Available The impact of the assimilation of MyOcean sea level anomalies along-track data on the analyses of the Sicily Channel Regional Model was studied. The numerical model has a resolution of 1/32° and is capable of reproducing mesoscale and sub-mesoscale features. The impact of the SLA assimilation is studied by comparing a simulation (SIM), which does not assimilate data, with an analysis (AN) assimilating SLA along-track multi-mission data produced in the framework of the MyOcean project. The quality of the analysis was evaluated by computing the RMSE of the misfits between analysis background and observations (sea level before assimilation). A qualitative evaluation of the ability of the analyses to reproduce mesoscale structures is accomplished by comparing model results with ocean colour and SST satellite data, able to detect such features on the ocean surface. CTD profiles allowed us to evaluate the impact of the SLA assimilation along the water column. We found a significant improvement for the AN solution in terms of SLA RMSE with respect to SIM (the averaged RMSE of AN SLA misfits over 2 years is about 0.5 cm smaller than SIM's). Comparison with CTD data shows a questionable improvement produced by the assimilation process in terms of vertical features: AN is better in temperature, while for salinity it gets worse than SIM at the surface. This suggests that a better a-priori description of the vertical error covariances would be desirable. The qualitative comparison of simulation and analyses with synoptic satellite independent data proves that SLA assimilation allows the model to correctly reproduce some dynamical features (above all, the circulation in the Ionian portion of the domain) and mesoscale structures otherwise misplaced or neglected by SIM. Such mesoscale changes also imply that the eddy momentum fluxes (i.e. Reynolds stresses) show major changes in the Ionian area. Changes in Reynolds stresses reflect a different pumping of eastward momentum from the eddy to

  12. Atomic-scale modeling of cellulose nanocrystals

    Science.gov (United States)

    Wu, Xiawa

    Cellulose nanocrystals (CNCs), the most abundant nanomaterials in nature, are recognized as one of the most promising candidates to meet the growing demand for green, bio-degradable and sustainable nanomaterials for future applications. CNCs draw significant interest due to their high axial elasticity and low density-elasticity ratio, both of which have been extensively researched over the years. In spite of the great potential of CNCs as functional nanoparticles for nanocomposite materials, a fundamental understanding of CNC properties and their role in composite property enhancement is not available. In this work, CNCs are studied using the molecular dynamics simulation method to predict their material behaviors at the nanoscale. (a) Mechanical properties include tensile deformation in the elastic and plastic regions using molecular mechanics, molecular dynamics and nanoindentation methods. This allows comparisons between the methods and closer connectivity to experimental measurement techniques. The elastic moduli in the axial and transverse directions are obtained, and the results are found to be in good agreement with previous research. The ultimate properties in plastic deformation are reported for the first time, and failure mechanisms are analyzed in detail. (b) The thermal expansion of CNC crystals and films is studied. It is proposed that CNC film thermal expansion is due primarily to single-crystal expansion and CNC-CNC interfacial motion. The relative contributions of inter- and intra-crystal responses to heating are explored. (c) Friction at cellulose-CNC and diamond-CNC interfaces is studied. The effects of sliding velocity, normal load, and relative angle between sliding surfaces are predicted. The cellulose-CNC model is analyzed in terms of the hydrogen bonding effect, and the diamond-CNC model complements some of the discussion of the previous model. In summary, both CNC material properties and molecular models are studied in this research, contributing to

  13. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to an average over the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  14. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2003-06-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30–60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 to 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  15. Combining EEG Microstates with fMRI Structural Features for Modeling Brain Activity.

    Science.gov (United States)

    Michalopoulos, Kostas; Bourbakis, Nikolaos

    2015-12-01

    Combining information from Electroencephalography (EEG) and Functional Magnetic Resonance Imaging (fMRI) has been a topic of increased interest recently. The main advantage of the EEG is its high temporal resolution, in the scale of milliseconds, while the main advantage of fMRI is the detection of functional activity with good spatial resolution. The advantages of each modality seem to complement each other, providing better insight in the neuronal activity of the brain. The main goal of combining information from both modalities is to increase the spatial and the temporal localization of the underlying neuronal activity captured by each modality. This paper presents a novel technique based on the combination of these two modalities (EEG, fMRI) that allow a better representation and understanding of brain activities in time. EEG is modeled as a sequence of topographies, based on the notion of microstates. Hidden Markov Models (HMMs) were used to model the temporal evolution of the topography of the average Event Related Potential (ERP). For each model the Fisher score of the sequence is calculated by taking the gradient of the trained model parameters. The Fisher score describes how this sequence deviates from the learned HMM. Canonical Partial Least Squares (CPLS) were used to decompose the two datasets and fuse the EEG and fMRI features. In order to test the effectiveness of this method, the results of this methodology were compared with the results of CPLS using the average ERP signal of a single channel. The presented methodology was able to derive components that co-vary between EEG and fMRI and present significant differences between the two tasks.
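
    The Fisher-score idea used here (the gradient of the sequence log-likelihood with respect to the trained HMM parameters) can be illustrated numerically. The two-state Gaussian HMM, the finite-difference gradient, and the toy observation sequence below are illustrative assumptions, not the authors' implementation:

```python
import math

def hmm_loglik(obs, pi, A, means, var):
    """Sequence log-likelihood under a Gaussian-emission HMM
    (scaled forward algorithm)."""
    K = len(pi)
    def emit(k, o):
        return math.exp(-(o - means[k]) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    alpha = [pi[k] * emit(k, obs[0]) for k in range(K)]
    ll = 0.0
    for o in obs[1:]:
        norm = sum(alpha)           # scaling factor; its logs sum to the likelihood
        ll += math.log(norm)
        alpha = [a / norm for a in alpha]
        alpha = [sum(alpha[j] * A[j][k] for j in range(K)) * emit(k, o)
                 for k in range(K)]
    return ll + math.log(sum(alpha))

def fisher_score(obs, pi, A, means, var, eps=1e-5):
    """Finite-difference gradient of the log-likelihood with respect to
    the transition-matrix entries (one score entry per entry of A)."""
    base = hmm_loglik(obs, pi, A, means, var)
    score = []
    for j in range(len(A)):
        for k in range(len(A)):
            Ap = [row[:] for row in A]
            Ap[j][k] += eps
            score.append((hmm_loglik(obs, pi, Ap, means, var) - base) / eps)
    return score

obs = [0.1, 0.0, 2.1, 1.9, 0.2]               # toy 1-D "microstate" features
pi, A = [0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]]
score = fisher_score(obs, pi, A, means=[0.0, 2.0], var=0.25)
print(score)
```

    The resulting score vector describes how the sequence deviates from the learned model, and is the kind of fixed-length feature that can then be fused with fMRI features.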

  16. Fractal Modeling and Scaling in Natural Systems - Editorial

    Science.gov (United States)

    The special issue of Ecological complexity journal on Fractal Modeling and Scaling in Natural Systems contains representative examples of the status and evolution of data-driven research into fractals and scaling in complex natural systems. The editorial discusses contributions to understanding rela...

  17. On scaling properties of cluster distributions in Ising models

    Science.gov (United States)

    Ruge, C.; Wagner, F.

    1992-01-01

    Scaling relations of cluster distributions for the Wolff algorithm are derived. We found them to be well satisfied for the Ising model in d=3 dimensions. Using scaling and a parametrization of the cluster distribution, we determine the critical exponent β/ν=0.516(6) with moderate effort in computing time.
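
    For illustration only (the paper works in d = 3; this sketch uses the 2D Ising model and records cluster sizes, the quantity whose distribution is analyzed), a minimal Wolff single-cluster update looks like:

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """One Wolff single-cluster update on an L x L periodic Ising lattice
    (spins stored row-major in a flat list). Returns the cluster size."""
    p_add = 1.0 - math.exp(-2.0 * beta)      # bond-activation probability
    seed = rng.randrange(L * L)
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for j in ((x + 1) % L + y * L, (x - 1) % L + y * L,
                  x + ((y + 1) % L) * L, x + ((y - 1) % L) * L):
            if j not in cluster and spins[j] == s0 and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                        # flip the whole cluster
        spins[i] = -spins[i]
    return len(cluster)

L, beta = 16, 0.5                            # 2D critical coupling is ~0.4407
rng = random.Random(0)
spins = [1] * (L * L)
sizes = [wolff_step(spins, L, beta, rng) for _ in range(200)]
print(sum(sizes) / len(sizes))               # mean flipped-cluster size
```

    Histogramming `sizes` over many updates gives the cluster distribution whose scaling relations the paper exploits.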

  18. Reference Priors for the General Location-Scale Model

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately

  19. Scaling Properties of a Hybrid Fermi-Ulam-Bouncer Model

    Directory of Open Access Journals (Sweden)

    Diego F. M. Oliveira

    2009-01-01

    under the framework of scaling description. The model is described by using a two-dimensional nonlinear area preserving mapping. Our results show that the chaotic regime below the lowest energy invariant spanning curve is scaling invariant and the obtained critical exponents are used to find a universal plot for the second momenta of the average velocity.

  20. Power law cosmology model comparison with CMB scale information

    Science.gov (United States)

    Tutusaus, Isaac; Lamine, Brahim; Blanchard, Alain; Dupays, Arnaud; Zolnierowski, Yves; Cohen-Tanugi, Johann; Ealet, Anne; Escoffier, Stéphanie; Le Fèvre, Olivier; Ilić, Stéphane; Pisani, Alice; Plaszczynski, Stéphane; Sakr, Ziad; Salvatelli, Valentina; Schücker, Thomas; Tilquin, André; Virey, Jean-Marc

    2016-11-01

    Despite the ability of the cosmological concordance model (ΛCDM) to describe the cosmological observations exceedingly well, power law expansion of the Universe scale radius, R(t) ∝ t^n, has been proposed as an alternative framework. We examine here these models, analyzing their ability to fit cosmological data using robust model comparison criteria. Type Ia supernovae (SNIa), baryonic acoustic oscillations (BAO) and acoustic scale information from the cosmic microwave background (CMB) have been used. We find that SNIa data either alone or combined with BAO can be well reproduced by both ΛCDM and power law expansion models with n ≈ 1.5, while the constant expansion rate model (n = 1) is clearly disfavored. Allowing for some redshift evolution in the SNIa luminosity essentially removes any clear preference for a specific model. The CMB data are well known to provide the most stringent constraints on standard cosmological models, in particular, through the position of the first peak of the temperature angular power spectrum, corresponding to the sound horizon at recombination, a scale physically related to the BAO scale. Models with n ≥ 1 lead to a divergence of the sound horizon and do not naturally provide the relevant scales for the BAO and the CMB. We retain an empirical footing to overcome this issue: we let the data choose the preferred values for these scales, while we recompute the ionization history in power law models, to obtain the distance to the CMB. In doing so, we find that the scale coming from the BAO data is not consistent with the observed position of the first peak of the CMB temperature angular power spectrum for any power law cosmology. Therefore, we conclude that when the three standard probes (SNIa, BAO, and CMB) are combined, the ΛCDM model is very strongly favored over any of these alternative models, which are then essentially ruled out.
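
    The distance arithmetic behind such fits is simple: for R(t) ∝ t^n one has H(z) = H0 (1+z)^(1/n), so the comoving distance integral has a closed form. The function below is an illustrative sketch (flat geometry, arbitrary H0), not the authors' analysis pipeline:

```python
import math

C_KM_S = 299792.458                          # speed of light [km/s]

def comoving_distance(z, n, h0=70.0):
    """Comoving distance [Mpc] in a flat power-law cosmology R(t) ∝ t^n,
    for which H(z) = H0 * (1 + z)**(1/n)."""
    if abs(n - 1.0) < 1e-12:                 # constant-expansion-rate limit
        return C_KM_S / h0 * math.log(1.0 + z)
    p = (n - 1.0) / n
    # integral of dz' / H(z') evaluated in closed form
    return C_KM_S / h0 * (n / (n - 1.0)) * ((1.0 + z) ** p - 1.0)

# Example redshift, comparing n = 1.5 with the constant-rate model n = 1
print(comoving_distance(0.57, 1.5), comoving_distance(0.57, 1.0))
```

    The n → 1 limit of the closed form reduces smoothly to the logarithmic constant-rate expression, which is one way to sanity-check the formula.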

  1. Small-scale models of multiring basins

    Science.gov (United States)

    Allemand, Pascal; Thomas, Pierre

    1999-07-01

    Small-scale sand-silicone simulations of multiring impact structures have been undertaken in order to understand the effects of the rheology of the lithosphere on the variability of natural multiring structures. For low sand-silicone thickness ratio (1:3), brittle strain is accommodated by spiral strike-slip faults. For higher sand-silicone ratios (1:1 or 2:1), an inner concentric ring affected by strike-slip faults is relayed by an external ring affected by concentric normal faults. The diameter of the inner ring decreases with the increase of the sand-silicone thickness ratio. It is suggested that the flexure of the brittle layer due to the silicone flow is responsible for the brittle strain field which is enhanced by the channel flow of the lower crust. The characteristic geometry of the intersection of conjugated strike-slip faults can be observed around large multiring basins on silicate crust such as Orientale on the Moon and on icy crust, such as Valhalla on Callisto and Gilgamesh on Ganymede. The strain field around these large craters is discussed in terms of mechanical properties of the lithospheres. On the Moon, large craters without relaxation faults, such as Imbrium are located on thin crust regions. The crust was too thin to have a ductile lower layer at the time of impact. Gilgamesh on Ganymede is surrounded mainly by strike-slip faults. Asgard on Callisto has the same diameter as Gilgamesh but is surrounded by concentric normal faults. The brittle-ductile thickness ratio is thus higher on Callisto than on Ganymede.

  2. The Faddeev Model and Scaling in Quantum Chromodynamics

    CERN Document Server

    Widom, A; Srivastava, Y N

    2016-01-01

    The Faddeev two body bound state model is discussed as an example of a QCD inspired model thought by some to exhibit dimensional transmutation. This simple model is solved exactly and the growth of a specified dimensional energy scale is shown to be an illusion.

  3. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  4. SCALABLE PERCEPTUAL AUDIO REPRESENTATION WITH AN ADAPTIVE THREE TIME-SCALE SINUSOIDAL SIGNAL MODEL

    Institute of Scientific and Technical Information of China (English)

    Al-Moussawy Raed; Yin Junxun; Song Shaopeng

    2004-01-01

    This work is concerned with the development and optimization of a signal model for scalable perceptual audio coding at low bit rates. A complementary two-part signal model consisting of Sines plus Noise (SN) is described. The paper presents essentially a fundamental enhancement to the sinusoidal modeling component. The enhancement involves an audio signal scheme based on carrying out overlap-add sinusoidal modeling at three successive time scales: large, medium, and small. The sinusoidal modeling is done in an analysis-by-synthesis, overlap-add manner across the three scales by using psychoacoustically weighted matching pursuits. The sinusoidal modeling residual at the first scale is passed to the smaller scales to allow for the modeling of various signal features at appropriate resolutions. This approach greatly helps to correct the pre-echo inherent in the sinusoidal model. This improves the perceptual audio quality over our previous sinusoidal modeling work while using the same number of sinusoids. The most obvious application for the SN model is in scalable, high-fidelity audio coding and signal modification.
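
    A bare-bones matching-pursuit extraction of sinusoids from a single frame illustrates the analysis-by-synthesis idea; the psychoacoustic weighting and the three-scale structure of the paper are deliberately omitted, and the coarse FFT-bin frequency grid is a simplification:

```python
import math

def sinusoid_pursuit(x, n_sines, sr):
    """Greedy matching pursuit: repeatedly fit and subtract the dominant
    sinusoid of the residual (unweighted, single frame)."""
    n = len(x)
    residual = list(x)
    params = []
    for _ in range(n_sines):
        best = None
        for k in range(1, n // 2):           # scan a coarse frequency grid
            f = k * sr / n
            c = sum(r * math.cos(2 * math.pi * f * t / sr)
                    for t, r in enumerate(residual))
            s = sum(r * math.sin(2 * math.pi * f * t / sr)
                    for t, r in enumerate(residual))
            e = c * c + s * s                # correlation energy at this bin
            if best is None or e > best[0]:
                best = (e, f, 2 * c / n, 2 * s / n)
        _, f, a, b = best
        params.append((f, a, b))
        for t in range(n):                   # synthesis: subtract the fit
            residual[t] -= (a * math.cos(2 * math.pi * f * t / sr)
                            + b * math.sin(2 * math.pi * f * t / sr))
    return params, residual

sr, n = 8000, 256
x = [math.sin(2 * math.pi * 500.0 * t / sr) for t in range(n)]
params, res = sinusoid_pursuit(x, 1, sr)
print(params[0][0])                          # estimated frequency [Hz]
```

    For a pure 500 Hz tone the pursuit locks onto the matching bin and the residual is driven essentially to zero; a perceptual coder would weight `e` by a masking model before choosing the best atom.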

  5. ScaleNet: a literature-based model of scale insect biology and systematics.

    Science.gov (United States)

    García Morales, Mayrolin; Denno, Barbara D; Miller, Douglass R; Miller, Gary L; Ben-Dov, Yair; Hardy, Nate B

    2016-01-01

    Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found on all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis and plant-insect interactions. ScaleNet was launched in 1995 to provide insect identifiers, pest managers, insect systematists, evolutionary biologists and ecologists efficient access to information about scale insect biological diversity. It provides comprehensive information on scale insects taken directly from the primary literature. Currently, it draws from 23,477 articles and describes the systematics and biology of 8194 valid species. For 20 years, ScaleNet ran on the same software platform. That platform is no longer viable. Here, we present a new, open-source implementation of ScaleNet. We have normalized the data model, begun the process of correcting invalid data, upgraded the user interface, and added online administrative tools. These improvements make ScaleNet easier to use and maintain and make the ScaleNet data more accurate and extendable. Database URL: http://scalenet.info.

  6. Large-scale structure of the Universe in unstable dark matter models

    Energy Technology Data Exchange (ETDEWEB)

    Doroshkevich, A.G.; Khlopov, M.U. (AN SSSR, Moscow (USSR). Inst. Prikladnoj Matematiki); Klypin, A.A. (Space Research Inst., Moscow (USSR))

    1989-08-15

    We discuss the formation and evolution of the large-scale structure in unstable dark matter (UDM) models. The main feature of the models is that galaxy formation starts after decays. We found reasonable agreement with the observed picture for models with mass of decaying particles 60-90 eV and decay time (0.3-1.5) x 10^9 yr. Galaxy formation in UDM models starts at z = 3 if products of decays are relativistic at present or at least at z = 6-7 if the products are non-relativistic. (author).

  7. Tiled vector data model for the geographical features of symbolized maps

    Science.gov (United States)

    Zhu, Haihong; Li, You; Zhang, Hang

    2017-01-01

    Electronic maps (E-maps) provide people with convenience in real-world space. Although web map services can display maps on screens, a more important function is their ability to access geographical features. An E-map that is based on raster tiles is inferior to vector tiles in terms of interactive ability because vector maps provide a convenient and effective method to access and manipulate web map features. However, the critical issue regarding rendering tiled vector maps is that geographical features that are rendered in the form of map symbols via vector tiles may cause visual discontinuities, such as graphic conflicts and losses of data around the borders of tiles, which likely represent the main obstacles to exploring vector map tiles on the web. This paper proposes a tiled vector data model for geographical features in symbolized maps that considers the relationships among geographical features, symbol representations and map renderings. This model presents a method to tailor geographical features in terms of map symbols and ‘addition’ (join) operations on the following two levels: geographical features and map features. Thus, these maps can resolve the visual discontinuity problem based on the proposed model without weakening the interactivity of vector maps. The proposed model is validated by two map data sets, and the results demonstrate that the rendered (symbolized) web maps present smooth visual continuity. PMID:28475578

  8. Scale-based spatial data model for GIS

    Institute of Scientific and Technical Information of China (English)

    WEI Zu-kuan

    2004-01-01

    As the primary medium of geographical information and the elementary object being manipulated, almost all maps in existing GIS adopt a layer-based model to represent geographic information. However, a map represented in the layer-based model is difficult to extend, and in web-based GIS the transmission of spatial data for map viewing is slow. To address these problems, this paper proposes a new method for representing spatial data: the scale-based model. In this model, maps are represented on three levels: scale-view, block and spatial object, and are organized as a set of map layers, named Scale-Views, each associated with a given scale. Finally, a prototype web-based GIS using the proposed spatial data representation is described briefly.
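
The three-level organization (scale-view, block, spatial object) can be sketched as nested containers. The class and field names below are assumptions for illustration, not the paper's schema.

```python
from dataclasses import dataclass, field

@dataclass
class SpatialObject:
    oid: str
    geometry: list          # coordinate pairs

@dataclass
class Block:
    block_id: tuple         # (row, col) within the scale-view
    objects: list = field(default_factory=list)

@dataclass
class ScaleView:
    scale: int              # scale denominator, e.g. 24000 for 1:24000
    blocks: dict = field(default_factory=dict)

    def add(self, block_id, obj):
        """File a spatial object into the block that covers it."""
        self.blocks.setdefault(block_id, Block(block_id)).objects.append(obj)

view = ScaleView(scale=24000)
view.add((0, 0), SpatialObject("ditch-7", [(0.0, 0.0), (1.0, 2.0)]))
print(len(view.blocks[(0, 0)].objects))
```

A web client would then request only the blocks of the scale-view matching its current zoom, which is the transmission saving the abstract claims over sending whole layers.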

  9. Irrigated Lands and Features, hydrology data set attributes;ditches, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Irrigated Lands and Features dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006....

  10. Irrigated Lands and Features, d3pipeline - utm, Published in 2005, 1:24000 (1in=2000ft) scale, Grand County Road Department.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Irrigated Lands and Features dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other information as of 2005. It is described as...

  11. Irrigated Lands and Features, d1pipeline - utm, Published in 2005, 1:24000 (1in=2000ft) scale, Grand County Road Department.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Irrigated Lands and Features dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other information as of 2005. It is described as...

  12. Irrigated Lands and Features, hydrology data set attributes;ditches, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Irrigated Lands and Features dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006. It...

  13. Towards convection-resolving, global atmospheric simulations with the Model for Prediction Across Scales (MPAS): an extreme scaling experiment

    Directory of Open Access Journals (Sweden)

    D. Heinzeller

    2015-08-01

    Full Text Available The Model for Prediction Across Scales (MPAS) is a novel set of earth-system simulation components and consists of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. This makes MPAS a promising tool for conducting climate-related impact studies of, for example, land-use changes in a consistent approach. Here, we present an in-depth evaluation of MPAS with regard to technical aspects of performing model runs and scalability for three medium-size meshes on four different High Performance Computing sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the performance of MPAS in terms of its capability to reproduce the dynamics of the West African Monsoon and its associated precipitation. Comparing 11-month runs for two meshes with observations and a Weather Research and Forecasting (WRF) reference model, we show that MPAS can reproduce the atmospheric dynamics on global and local scales, but that further optimisation is required to address a precipitation excess for this region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss necessary modifications of the model code to improve its parallel performance, both in general and specific to the HPC environment. We confirm good scaling (70 % parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric
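
The scaling figure quoted above (70 % parallel efficiency or better) follows the usual definition: measured speedup divided by the ideal speedup relative to a reference run. A sketch with hypothetical timings and core counts (the numbers are not from the paper):

```python
def parallel_efficiency(t_ref, n_ref, t_n, n):
    """Parallel efficiency of a run on n cores relative to a
    reference run on n_ref cores: measured speedup / ideal speedup."""
    speedup = t_ref / t_n
    ideal = n / n_ref
    return speedup / ideal

# Hypothetical timings: doubling the cores from 65,536 to 131,072
# cuts the time per model step from 100 s to 60 s.
eff = parallel_efficiency(t_ref=100.0, n_ref=65536, t_n=60.0, n=131072)
print(f"{eff:.0%}")
```

Efficiencies below 100 % at these scales typically come from communication overhead and load imbalance, which for MPAS the authors trace in part to the unstructured mesh decomposition.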

  14. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-04-19

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will
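
The mismatch between reactions formulated in terms of average concentrations and those controlled by local concentrations is easy to demonstrate for a nonlinear rate law: with second-order kinetics, the rate computed from grid-average concentrations differs from the average of the subgrid rates whenever reactants are incompletely mixed. A toy two-cell sketch (all values are illustrative):

```python
k = 1.0  # hypothetical second-order rate constant, r = k * cA * cB

# Two subgrid cells with fully segregated reactants
cA = [2.0, 0.0]
cB = [0.0, 2.0]

# True grid-scale rate: average of the local (subgrid) rates
mean_of_rates = sum(k * a * b for a, b in zip(cA, cB)) / 2

# Rate a grid-scale model computes from the average concentrations
rate_of_means = k * (sum(cA) / 2) * (sum(cB) / 2)

# No reaction actually occurs, yet the averaged model predicts one
print(mean_of_rates, rate_of_means)
```

This is the grid-scale artifact the project's pore-scale-to-field-scale upscaling hierarchy is designed to correct.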

  15. Orbital and millennial-scale features of atmospheric CH4 over the past 800,000 years.

    Science.gov (United States)

    Loulergue, Laetitia; Schilt, Adrian; Spahni, Renato; Masson-Delmotte, Valérie; Blunier, Thomas; Lemieux, Bénédicte; Barnola, Jean-Marc; Raynaud, Dominique; Stocker, Thomas F; Chappellaz, Jérôme

    2008-05-15

    Atmospheric methane is an important greenhouse gas and a sensitive indicator of climate change and millennial-scale temperature variability. Its concentrations over the past 650,000 years have varied between approximately 350 and approximately 800 parts per 10^9 by volume (p.p.b.v.) during glacial and interglacial periods, respectively. In comparison, present-day methane levels of approximately 1,770 p.p.b.v. have been reported. Insights into the external forcing factors and internal feedbacks controlling atmospheric methane are essential for predicting the methane budget in a warmer world. Here we present a detailed atmospheric methane record from the EPICA Dome C ice core that extends the history of this greenhouse gas to 800,000 yr before present. The average time resolution of the new data is approximately 380 yr and permits the identification of orbital and millennial-scale features. Spectral analyses indicate that the long-term variability in atmospheric methane levels is dominated by approximately 100,000 yr glacial-interglacial cycles up to approximately 400,000 yr ago with an increasing contribution of the precessional component during the four more recent climatic cycles. We suggest that changes in the strength of tropical methane sources and sinks (wetlands, atmospheric oxidation), possibly influenced by changes in monsoon systems and the position of the intertropical convergence zone, controlled the atmospheric methane budget, with an additional source input during major terminations as the retreat of the northern ice sheet allowed higher methane emissions from extending periglacial wetlands. Millennial-scale changes in methane levels identified in our record as being associated with Antarctic isotope maxima events are indicative of ubiquitous millennial-scale temperature variability during the past eight glacial cycles.

  16. Middle school teachers' attitudes and behaviors for the prevention of violence in schools: a scale-development study in Turkey

    Directory of Open Access Journals (Sweden)

    Nazan Savas

    2015-06-01

    Full Text Available Aim: The aim was to determine the featured behaviors of middle school teachers for preventing violence and to create a measurement scale. Method: 232 teachers, representing the population of Antakya, participated. The scale created by the researchers was administered, and validity and reliability analyses were performed. Results: The Kaiser-Meyer-Olkin value of the scale is 0.873, the Bartlett test result is 2505.7 (p<0.001) and the Cronbach alpha reliability coefficient is 0.902. The scale explains 61.9% of the total variance and consists of seven factors. These factors are, respectively: 1. Student-centered exercises that enable students to participate, with regular evaluation of the exercises. 2. Creating an educational environment that improves students' communication skills and self-confidence through parent-centered exercises. 3. Improving students' social relations and their defence skills against violence through teachers' responses at the times, places and situations that carry risk for students. 4. Teachers' contribution to violence-prevention programs. 5. Establishing class rules together and taking part in applying these rules. 6. Teachers' exercises that bring students together in activities against violence. 7. Teachers being unprejudiced towards their students and not hiding violent events. Conclusion: Teachers' attributes and behaviors involving parents, teachers, students and the school environment play an important part in preventing violence in schools. Teachers' standing with respect to preventing violence can be evaluated with a scale that includes these attributes and behaviors. [TAF Prev Med Bull 2015; 14(3): 247-256]
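
The Cronbach alpha of 0.902 reported above is the standard internal-consistency statistic, computed from an item-score matrix as k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch with made-up Likert responses (the data below are not from the study):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for rows of respondent scores (k items per row)."""
    k = len(items[0])
    item_var_sum = sum(variance(col) for col in zip(*items))
    total_var = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical responses: 5 teachers x 4 items on a 1-5 Likert scale
scores = [
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 4],
]
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.9, as reported for this scale, are conventionally read as excellent internal consistency.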

  17. 2D-HIDDEN MARKOV MODEL FEATURE EXTRACTION STRATEGY OF ROTATING MACHINERY FAULT DIAGNOSIS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new feature extraction method based on a 2D hidden Markov model (HMM) is proposed, with time and frequency indices introduced to represent the new features. The strategy is tested with experimental data collected from a Bently rotor experiment system. The results show that the methodology is very effective for extracting features of vibration signals during rotor speed-up and can be extended to other non-stationary signal analysis fields in the future.
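
The abstract gives no algorithmic detail of the 2D-HMM itself, but the core computation any HMM-based diagnosis builds on is the forward algorithm, which scores an observation sequence against a trained model. A minimal 1-D discrete-output sketch (all probabilities below are invented):

```python
def hmm_forward(pi, A, B, obs):
    """Forward-algorithm likelihood P(obs) for a discrete-output HMM.

    pi: initial state probabilities; A: state transition matrix;
    B: emission probabilities B[state][symbol]; obs: symbol sequence.
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Hypothetical 2-state model scoring a binary symbol stream
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(round(hmm_forward(pi, A, B, [0, 1, 0]), 4))
```

In a fault-diagnosis setting one such model is trained per fault class, and a vibration signal (quantised into symbols) is assigned to the class whose model gives the highest likelihood.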

  18. Full-Scale Cookoff Model Validation Experiments

    Energy Technology Data Exchange (ETDEWEB)

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for the time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of the unreacted explosive was recovered in the end-heated experiment and less than 30 percent in the side-heated test.

  19. Genome-scale modeling for metabolic engineering

    Energy Technology Data Exchange (ETDEWEB)

    Simeonidis, E; Price, ND

    2015-01-13

    We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information, an area which we expect will become increasingly important for metabolic engineering, and present recent developments in the field of metabolic and regulatory integration.

  20. Genome-scale modeling for metabolic engineering.

    Science.gov (United States)

    Simeonidis, Evangelos; Price, Nathan D

    2015-03-01

    We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information, an area which we expect will become increasingly important for metabolic engineering, and present recent developments in the field of metabolic and regulatory integration.
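
Flux balance analysis, as discussed in the two records above, is a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction sketch using `scipy.optimize.linprog` (the network, bounds and reaction names are invented for illustration):

```python
from scipy.optimize import linprog

# Stoichiometric matrix: rows = metabolites A, B; cols = v_in, v1, v2
S = [[1, -1, 0],    # A: produced by uptake v_in, consumed by v1
     [0, 1, -1]]    # B: produced by v1, consumed by export v2
b = [0, 0]          # steady state: S @ v = 0
bounds = [(0, 10), (0, 100), (0, 100)]  # uptake capped at 10 units
c = [0, 0, -1]      # linprog minimises, so negate to maximise v2

res = linprog(c, A_eq=S, b_eq=b, bounds=bounds, method="highs")
print(res.x[2])  # optimal export flux: limited by the uptake bound
```

Genome-scale models apply exactly this formulation with thousands of reactions, and the gene-deletion frameworks the review covers repeatedly re-solve such LPs with selected fluxes forced to zero.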