Zacharia, Kurien; Varghese, Surekha Mariam
A cost-effective, gesture-based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped. Users can interact with the virtual model as they would with the original working equipment. Image and sound processing techniques are used to capture and track the user's interactions with the model. VIP is a flexible and interactive prototyping method with many applications in ubiquitous computing environments. Various commercial and socio-economic applications of VIP, as well as its extension to interactive advertising, are also discussed.
Haapala, O. (Olli)
This thesis was set to study the utilization of the MathWorks' Simulink® program in model-based application software development and its compatibility with the Vacon 100 inverter. The target was to identify all the problems related to everyday usage of this method and to create a white paper on how to execute a model-based design to create Vacon 100 compatible system software. Before this thesis was started, there was very little knowledge of the compatibility of this method. However durin...
Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald
This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model-based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels per image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.
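The predict-then-correlate step described above can be illustrated with a minimal sketch. The SSD (sum of squared differences) matching criterion, the search-window size, and the array layout are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def track_feature(frame, template, predicted, search=5):
    """Refine a predicted feature position by SSD correlation of a small
    template against the current frame, within a local search window."""
    h, w = template.shape
    best, best_score = predicted, np.inf
    py, px = predicted
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = py + dy, px + dx
            if y < 0 or x < 0:
                continue
            patch = frame[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue
            score = np.sum((patch - template) ** 2)  # SSD: 0 = perfect match
            if score < best_score:
                best, best_score = (y, x), score
    return best

# Example: a feature predicted at (10, 10) actually moved to (12, 13).
frame = np.zeros((40, 40))
frame[12:17, 13:18] = 1.0
template = np.ones((5, 5))
pos = track_feature(frame, template, predicted=(10, 10))  # → (12, 13)
```

In the paper's setting the prediction would come from the model update of the previous frames; here it is simply passed in as a coordinate.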
Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells, using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The
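The Theis calculation that a program such as TYPECURVEGRID-Py performs can be sketched in a few lines with scipy, since the Theis well function W(u) is the exponential integral E1. The parameter values below are illustrative only, not taken from the cited site work:

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1 = Theis well function W(u)

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s [m] at radius r [m] and time t [s] in a fully confined aquifer.

    Q: pumping rate [m^3/s], T: transmissivity [m^2/s], S: storativity [-].
    s = Q / (4 pi T) * W(u), with u = r^2 S / (4 T t).
    """
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Drawdown decays with distance from the pumping well.
s_near = theis_drawdown(Q=0.01, T=1e-3, S=1e-4, r=10.0, t=3600.0)
s_far = theis_drawdown(Q=0.01, T=1e-3, S=1e-4, r=100.0, t=3600.0)
```

Vectorizing `r` and `t` as numpy arrays gives the drawdown grids used for plotting type curves.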
Kosmidis, Ioannis; Karlis, Dimitris
The majority of model-based clustering techniques are based on multivariate Normal models and their variants. In this paper copulas are used for the construction of flexible families of models for clustering applications. The use of copulas in model-based clustering offers two direct advantages over current methods: i) the appropriate choice of copulas provides the ability to obtain a range of exotic shapes for the clusters, and ii) the explicit choice of marginal distributions for the cluster...
MA Jin; HAN Dong; HE RenMu
The load model is one of the most important elements in power system operation and control. However, owing to its complexity, load modeling remains an open and very difficult problem. Summarizing our work on measurement-based load modeling in China over more than twenty years, this paper systematically introduces the mathematical theory and applications of load modeling. The flow chart and algorithms for measurement-based load modeling are presented. A composite load model structure with 13 parameters is also proposed. Analysis results based on trajectory sensitivity theory indicate the importance of the load model parameters for the identification. Case studies show the accuracy of the presented measurement-based load model. The load model thus built has been validated by field measurements all over China. Future working directions on measurement-based load modeling are also discussed in the paper.
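The abstract does not specify the 13-parameter composite structure, so as a much simpler illustration of a measurement-based static load model, here is the classic ZIP formulation (constant-impedance, constant-current, constant-power mix), whose coefficients would be identified from measured voltage/power trajectories:

```python
def zip_load(V, V0, P0, a_z, a_i, a_p):
    """Static ZIP load model.

    P(V) = P0 * (a_z*(V/V0)^2 + a_i*(V/V0) + a_p), with a_z + a_i + a_p = 1.
    a_z: constant-impedance share, a_i: constant-current share,
    a_p: constant-power share; V/V0 is per-unit voltage.
    """
    v = V / V0
    return P0 * (a_z * v**2 + a_i * v + a_p)

# At nominal voltage the model returns the nominal power for any valid mix;
# below nominal voltage, the impedance and current terms reduce the load.
p_nom = zip_load(1.0, 1.0, 100.0, 0.4, 0.3, 0.3)
p_low = zip_load(0.95, 1.0, 100.0, 0.4, 0.3, 0.3)
```

Parameter identification would then fit `a_z`, `a_i`, `a_p` (and the dynamic parameters of the full composite model) to field measurements, e.g. by least squares.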
ZENG Hongwei; MIAO Huaikou
A formal model representing the navigation behavior of a Web application as a Kripke structure is proposed, and an approach that applies model checking to test case generation is presented. The Object Relation Diagram, as the object model, is employed to describe the object structure of a Web application design and can be translated into the behavior model. A key problem of model checking-based test generation for a Web application is how to construct a set of trap properties that intend to cause violations of model checking against the behavior model and output counterexamples used to construct the test sequences. We give an algorithm that derives trap properties from the object model with respect to node and edge coverage criteria.
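The trap-property idea can be sketched concisely: for each transition the generator emits a property claiming the transition is never taken, so the model checker's counterexample is a path exercising it. The LTL syntax and state-variable naming below are illustrative assumptions, not the paper's notation:

```python
def trap_properties_for_edges(edges):
    """One trap property per transition (s, t) for edge coverage.

    Each property falsely claims the edge is never taken; a model checker
    refutes it with a counterexample path that covers the edge, and that
    path becomes a test sequence through the Web application.
    """
    return ["G !(state = %s & X state = %s)" % (s, t) for (s, t) in edges]

# Two navigation edges of a hypothetical Web application.
props = trap_properties_for_edges([("home", "login"), ("login", "cart")])
```

Node coverage works the same way, with one trap property per state (`G !(state = s)`).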
The approach used in model-based design is to build a model of the system in a graphical/textual language. In the older model-based design approach, the correctness of the model is usually established by simulation. Simulation, which is analogous to testing, cannot guarantee that the design meets the system requirements under all possible scenarios. This is, however, possible if the modeling language is based on formal semantics, so that the developed model can be subjected to formal verification of properties based on the specification. The verified model can then be translated into an implementation through a reliable/verified code generator, thereby reducing the necessity of low-level testing. Such a methodology is admissible as per the guidelines of the IEC 60880 standard applicable to software used in computer-based systems performing category A functions in nuclear power plants, and would also be acceptable for category B functions. In this article, the experience in implementation and formal verification of important controllers used in the process control system of a nuclear reactor is presented. We have used the SCADE (Safety Critical System Analysis and Design Environment) environment to model the controllers. The modeling language used in SCADE is based on the synchronous dataflow model of computation. A set of safety properties has been verified using formal verification techniques.
Toshio Kodama; Tosiyasu L. Kunii; Yoichi Seki
A cellular model based on the Incrementally Modular Abstraction Hierarchy (IMAH) is a novel model that can represent the architecture of and changes in cyberworlds, preserving invariants from a general level to a specific one. We have developed a data processing system called the Cellular Data System (CDS). In the development of business applications, you can prevent combinatorial explosion in the process of business design and testing by using CDS. In this paper, we have first designed and implemented wide-use algebra on the presentation level. Next, we have developed and verified the effectiveness of two general business applications using CDS: 1) a customer information management system, and 2) an estimate system.
Muhammad Shahbani Abu Bakar
Analysis and design play very important roles in Data Warehouse (DW) system development and form the backbone of the success or failure of any DW project. The emerging trend of analytic-based applications requires the DW system to be implemented in the mobile environment. However, current analysis and design approaches are based on existing DW environments that focus on the deployment of the DW system in traditional web-based applications. This creates limitations on user access and on the use of analytical information by decision makers. Consequently, this prolongs the adoption of analytic-based applications by users and organizations. This research aims to suggest an approach for modeling the DW and designing the DW system in mobile environments. A variant-dimension modeling technique was used to enhance the DW schemas in order to accommodate the requirements of mobile characteristics in the DW design. The proposed mobile DW system was evaluated by expert review and supports the success of mobile DW-based application implementation.
Wu Hongxin; Wang Yingchun; Xing Yan
This paper presents a new intelligent control method based on an intelligent characteristic model for a kind of complicated plant with nonlinearities and uncertainties, whose controlled output variables cannot be measured online continuously. The basic idea of this method is to utilize intelligent techniques to form the characteristic model of the controlled plant according to the principle of combining the characteristics of the plant with the control requirements, and then to present a new design method for an intelligent controller based on this characteristic model. First, the modeling principles and expression of the intelligent characteristic model are presented. Then, based on the description of the intelligent characteristic model, the design principles and methods of the intelligent controller, composed of several open-loop and closed-loop sub-controllers with qualitative and quantitative information, are given. Finally, the application of this method to alumina concentration control in a real aluminum electrolytic process is introduced. It is proved in practice that the above methods are not only easy to implement in engineering design but also avoid the trial-and-error of general intelligent controllers. The method has taken good effect in application: achieving long-term stable control of low alumina concentration and greatly increasing the controlled ratio of anode effect from 60% to 80%.
Nowakowski, Antoni; Kaczmarek, Mariusz; Ruminski, Jacek; Hryciuk, Marcin; Renkielska, Alicja; Grudzinski, Jacek; Siebert, Janusz; Jagielak, Dariusz; Rogowski, Jan; Roszak, Krzysztof; Stojek, Wojciech
The proposal to use active thermography in medical diagnostics is promising in some applications concerning investigation of directly accessible parts of the human body. The combination of dynamic thermograms with thermal models of the investigated structures offers the attractive possibility of reconstructing internal structure based on the different thermal properties of biological tissues. Measurements of temperature distribution synchronized with external light excitation allow registration of dynamic changes of local temperature dependent on heat exchange conditions. Preliminary results of active thermography applications in medicine are discussed. For skin and under-skin tissues an equivalent thermal model may be determined. For the assumed model, its effective parameters may be reconstructed based on the results of transient thermal processes. For known thermal diffusivity and conductivity of specific tissues, the local thickness of a two- or three-layer structure may be calculated. Results of some medical cases as well as reference data from an in vivo study on animals are presented. The method was also applied to evaluate the state of the human heart during open-chest cardio-surgical interventions. Reference studies of evoked heart infarct in pigs are reported, too. We see the proposed technique, new in medical applications, as a promising diagnostic tool. It is a fully non-invasive, clean, handy, fast and affordable method giving not only a qualitative view of the investigated surfaces but also an objective quantitative measurement result, accurate enough for many applications including fast screening of affected tissues.
Aliev, Rafik; Memmedova, Konul
Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger's Anxiety Test, completed by students. The GPA of students was used as the measure of educational achievement. The application of Z-information modeling allows us to increase the precision and reliability of data-processing results in the presence of uncertainty in the input data created from the completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented. PMID:26339231
Wu, Xiaojun; Liu, Weijun; Wang, Tianran
Traditionally, 3D models, even so-called solid ones, can only represent an object's surface information, and the interior is regarded as homogeneous. In many applications, it is necessary to represent the interior structures and attributes of an object, such as materials, density and color, etc. A surface model is incapable of bearing this task; in this case, a voxel model is a good choice. Voxelization is the process of converting a geometrically represented 3D object into a three-dimensional volume dataset. In this paper, an algorithm is proposed to voxelize polygonal meshes ported from current CAD modeling packages into volume datasets, based on the easy indexing property of the octree structure. The minimal distance to the feature voxel (or voxels) is taken as the criterion to distribute different material compositions to get a new kind of material called FGM (functionally graded material), which is suitable for the interface of RPM (Rapid Prototyping Manufacturing).
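The distance-based material distribution can be sketched as follows. The linear grading law and the single feature point are simplifying assumptions for illustration; the paper's criterion uses distance to feature voxels within an octree, and real FGM designs may use other grading functions:

```python
import numpy as np

def fgm_fractions(voxel_centers, feature_point, d_max):
    """Material-A volume fraction per voxel, graded linearly with the
    distance to a feature point and clamped to [0, 1].

    voxel_centers: (N, 3) array of voxel center coordinates.
    The complementary material B gets fraction 1 - f in each voxel.
    """
    d = np.linalg.norm(voxel_centers - feature_point, axis=1)
    return np.clip(d / d_max, 0.0, 1.0)

# Voxels at distance 0, 1 and 3 from the feature, with full grading at d_max=2.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
fractions = fgm_fractions(centers, np.array([0.0, 0.0, 0.0]), d_max=2.0)
```

In an octree-backed implementation the distances would be evaluated per leaf cell rather than over a dense voxel array.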
Jack, James T.; Delashmit, Walter H.
A long-standing need for the application of laser radar (LADAR) to a wider range of targets is a technique for creating a "target model" from target photographs. This is feasible since LADAR images are 3D, and photographs at selected azimuth/elevation angles allow the required models to be created. Preferred photographic images of a wide range of selected targets were specified and collected. These photographs were processed using code developed in-house and some commercial software packages. These "models" were used in model-based automatic target recognition (ATR) algorithms, and the ATR performance was excellent. This technique differs significantly from other techniques for creating target models, which require CAD models that are much harder to manipulate and contain extraneous detail. The technique in this paper develops the photograph-based target models in component form so that any component (e.g., the turret of a tank) can be independently manipulated, such as rotating the turret. This new technique also allows models to be generated for targets for which no actual LADAR data has ever been collected. A summary of the steps used in the modeling process is as follows: start with a set of input photographs, calibrate the imagery into a 3D world space to generate points corresponding to target features, create target geometry by connecting points with surfaces, mark all co-located points in each image view and verify alignment of points, place in a 3D space, create models by creating surfaces (i.e., connect points with planar curves) and scale the target into real-world coordinates.
Žic, Tomislav; Temmer, Manuela; Vršnak, Bojan
The drag-based model (DBM) is an analytical model commonly used for calculating the kinematics of coronal mass ejections (CMEs) in interplanetary space and for predicting CME arrival times and impact speeds at arbitrary targets in the heliosphere. The main assumption of the model is that beyond a distance of about 20 solar radii from the Sun, drag is the dominant force in interplanetary space. The previous version of the DBM relied on the rough assumption of averaged, unperturbed and constant environmental conditions, as well as constant CME properties, throughout the entire interplanetary CME propagation. The continuation of our work consists of enhancing the model into a form which uses a time-dependent and perturbed environment, without constraints on CME properties and forecasting distance. The extension provides the possibility of application in various scenarios, such as automatic least-squares fitting on initial CME kinematic data suitable for real-time forecasting of CME kinematics, or embedding the DBM into pre-calculated interplanetary ambient conditions provided by advanced numerical simulations (for example, the ENLIL and EUHFORIA codes). A demonstration of the enhanced DBM is available on the web-site: http://www.geof.unizg.hr/~tzic/dbm.html. We acknowledge the support of the European Social Fund under the "PoKRet" project.
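The drag equation of motion underlying the DBM, dv/dt = -γ(v - w)|v - w| with drag parameter γ and solar wind speed w, has a closed-form solution for constant γ and w; simple numerical integration, as sketched below, is one way to handle the time-dependent ambient conditions of the enhanced model. The parameter values are typical illustrative numbers (km, km/s, γ in km⁻¹), not taken from the paper:

```python
def dbm_propagate(r0, v0, gamma, w, t_end, dt=60.0):
    """Euler integration of the drag equation dv/dt = -gamma*(v - w)*|v - w|.

    gamma and w may be constants, or callables gamma(t), w(t) for the
    time-dependent, perturbed-environment case. Returns (r, v) at t_end.
    """
    r, v, t = r0, v0, 0.0
    while t < t_end:
        g = gamma(t) if callable(gamma) else gamma
        ws = w(t) if callable(w) else w
        a = -g * (v - ws) * abs(v - ws)  # drag decelerates toward the wind speed
        v += a * dt
        r += v * dt
        t += dt
    return r, v

# A fast CME (1000 km/s) launched at 20 solar radii into a 400 km/s wind
# decelerates toward the ambient wind speed over a day of propagation.
r, v = dbm_propagate(r0=20 * 6.96e5, v0=1000.0, gamma=0.2e-7, w=400.0, t_end=86400.0)
```

For operational forecasting, the analytic constant-ambient solution is preferable where it applies; the numeric form shown here is the simplest route to embedding externally simulated ambient conditions.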
Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)
The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.
Bona Lu; Nan Zhang; Wei Wang; Jinghai Li
Recently, EMMS-based models have been widely applied in simulations of high-throughput circulating fluidized beds (CFBs) with fine particles. Their use for low-flux systems, such as the CFB boiler (CFBB), still remains unexplored. In this work, it has been found that the original definition of cluster diameter in the EMMS model is unsuitable for simulations of the CFB boiler with low solids flux. To remedy this, we propose a new model of cluster diameter. The EMMS-based drag model (EMMS/matrix model) with this revised cluster definition is validated through computational fluid dynamics (CFD) simulation of a CFB boiler.
Shapiro, Linda G.
A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint from the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.
The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing, for normal operation as well as for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies are established with the help of these modern methods. These analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method is demonstrated by the example of a fuzzy-supported observer, in which a classical linear observer is combined with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables such as steam content and mixture level within pressure vessels containing a water-steam mixture during accidental depressurizations. For this example, the existing non-linearities are classified and the verification of the model is explained. The advantages of the hybrid method in comparison to classical model-based measuring methods are demonstrated by the estimation results. The consideration of the parameters which have an important influence on the non-linearities requires the inclusion of high-dimensional structures of fuzzy logic within the model-based measuring methods. Therefore, methods are presented which allow the conversion of these high-dimensional structures to two-dimensional structures of fuzzy logic. As an efficient solution to this problem, a method based on cascaded fuzzy controllers is presented.
Pavšelj, Nataša; Miklavčič, Damijan
Background. Numerous experiments have to be performed before a biomedical application is put to practical use in a clinical environment. As complementary work to in vitro, in vivo and medical experiments, we can use analytical and numerical models to represent, as realistically as possible, real biological phenomena of, in our case, electroporation. In this way we can evaluate different electrical parameters in advance, such as pulse amplitude, duration, number of pulses, or different electrod...
Toma, M; Busam, A; Ortmaier, T; Raczkowsky, J.; Höpner, C; Marmulla, R.
This paper presents our research in medical workflow modeling for computer- and robot-based surgical intervention in maxillofacial surgery. Our goal is to provide a method for clinical workflow modeling that includes workflow definition for pre- and intra-operative steps, supports analysis of new methods for combining conventional surgical procedures with robot- and computer-assisted procedures, and facilitates easy implementation of hardware and software systems.
Yan, Lei; Krozer, Viktor; Michaelsen, Rasmus Schandorph; Djurhuus, Torsten; Johansen, Tom Keinicke
In this work, a physical Schottky barrier diode model is presented. The model is based on physical parameters such as anode area, Ohmic contact area, doping profile from epitaxial (EPI) and substrate (SUB) layers, layer thicknesses, barrier height, specific contact resistance, and device...... temperature. The effects of barrier height lowering, nonlinear resistance from the EPI layer, and hot electron noise are all included for accurate characterization of the Schottky diode. To verify the diode model, measured I-V and C-V characteristics are compared with the simulation results. Due to the lack...
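The core of any such physical Schottky diode model is the thermionic-emission I-V relation; a minimal sketch is given below. The barrier height, ideality factor, area and Richardson constant are generic illustrative values, and the refinements listed in the abstract (barrier lowering, nonlinear EPI resistance, hot-electron noise) are deliberately omitted:

```python
import math

def schottky_iv(V, area_cm2=1e-5, phi_b=0.85, n=1.1, T=300.0, A_star=110.0):
    """Thermionic-emission I-V of a Schottky barrier diode (ideal form).

    I = A* A T^2 exp(-phi_b / (kT/q)) * (exp(V / (n kT/q)) - 1)
    area_cm2: anode area [cm^2], phi_b: barrier height [eV],
    n: ideality factor, A_star: Richardson constant [A cm^-2 K^-2].
    """
    kT_q = 8.617e-5 * T  # thermal voltage kT/q in volts
    i_s = A_star * area_cm2 * T**2 * math.exp(-phi_b / kT_q)  # saturation current
    return i_s * (math.exp(V / (n * kT_q)) - 1.0)
```

A full model of the kind described would add a bias-dependent barrier (image-force lowering), a nonlinear series resistance for the EPI layer, and the junction C-V characteristic.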
Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh
A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. Structural considerations...... the algorithm is capable of detecting four different faults in the mechanical and hydraulic parts of the pump....
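The residual-evaluation step behind an Analytical Redundancy Relation scheme can be sketched generically: each ARR compares a measurement with its model-based prediction, and a fault is flagged when the residual exceeds a threshold. The signal names and thresholds below are hypothetical, not the pump quantities used in the paper:

```python
def evaluate_residuals(measurements, model, thresholds):
    """Evaluate a bank of analytical-redundancy residuals.

    measurements / model / thresholds: dicts keyed by signal name.
    A fault is flagged for a signal when |measured - predicted| exceeds
    its threshold; the pattern of flags isolates the faulty component.
    """
    flags = {}
    for name, measured in measurements.items():
        residual = measured - model[name]
        flags[name] = abs(residual) > thresholds[name]
    return flags

# Hypothetical pump signals: pressure 'p' agrees with the model, flow 'q' does not.
flags = evaluate_residuals(
    measurements={"p": 5.0, "q": 1.0},
    model={"p": 4.9, "q": 2.0},
    thresholds={"p": 0.5, "q": 0.5},
)
```

Structural analysis, as in the paper, determines which subsets of model equations yield such testable residuals and which faults each residual is sensitive to.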
Pradhan, Biswajeet; Lee, Saro; Buchroithner, Manfred F.
This paper presents the assessment results of three spatially based probabilistic models using Geoinformation Techniques (GIT) for landslide susceptibility analysis at Penang Island in Malaysia. Landslide locations within the study area were identified by interpreting aerial photographs and satellite images, supported by field surveys. Maps of the topography, soil type, lineaments and land cover were constructed from the spatial data sets. Ten landslide-related factors were extracted from the spatial database, and the frequency ratio, fuzzy logic, and bivariate logistic regression coefficients of each factor were computed. Finally, landslide susceptibility maps were drawn for the study area using the frequency ratio, fuzzy logic and bivariate logistic regression models. For verification, the results of the analyses were compared with actual landslide locations in the study area. The verification results show that the bivariate logistic regression model provides slightly higher prediction accuracy than the frequency ratio and fuzzy logic models.
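Of the three models, the frequency ratio is the simplest to state: for each class of a factor map, it is the class's share of landslide pixels divided by its share of all pixels. A minimal sketch, with invented class counts for illustration:

```python
def frequency_ratio(landslide_px, total_px):
    """Frequency ratio per factor class.

    landslide_px / total_px: dicts mapping class name -> pixel count
    (landslide pixels in the class / all pixels in the class).
    FR > 1 marks classes with above-average landslide susceptibility.
    Classes with zero landslide pixels simply get FR = 0.
    """
    ls_sum = sum(landslide_px.values())
    tot_sum = sum(total_px.values())
    return {c: (landslide_px[c] / ls_sum) / (total_px[c] / tot_sum)
            for c in total_px}

# Hypothetical slope-class counts: steep terrain hosts most landslides.
fr = frequency_ratio({"steep": 80, "gentle": 20}, {"steep": 200, "gentle": 800})
```

The susceptibility map is then obtained by summing, per pixel, the FR values of the classes the pixel falls into across all ten factors.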
Gu, X.; Blackmore, K. L.
This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…
Vengattaraman, T; Baskaran, R
Cloud computing is an emerging platform of service computing designed for swift and dynamic delivery of assured computing resources. Cloud computing provides Service-Level Agreements (SLAs) for guaranteed uptime availability, enabling convenient and on-demand network access to distributed and shared computing resources. Though the cloud computing paradigm holds its potential status in the field of distributed computing, cloud platforms have not yet come to the attention of the majority of researchers and practitioners. More specifically, the researcher and practitioner community still has fragmented and imperfect knowledge of cloud computing principles and techniques. In this context, one of the primary motivations of the work presented in this paper is to reveal the versatile merits of the cloud computing paradigm, and hence the objective of this work is to bring out the remarkable significance of cloud computing through an application environment. In this work, a cloud computing model for sof...
The infrastructure underlying distributed information systems is heterogeneous and very complex. Middleware allows the development of distributed information systems without knowledge of the functioning details of the infrastructure, by abstracting it. An essential issue in designing such systems is the choice of middleware technologies. An architectural model of a middleware-based SCADA system is proposed in this paper. This system is formed of servers that centralize data and clients, which receive information from a server, thus allowing the chart display of this information. All these components have a specific functionality and can exchange information by means of a middleware bus. A middleware bus signifies a software bus in which several middleware technologies can coexist.
Schindewolf, Marcus; Herrmann, Marie-Kristin; Herrmann, Anne-Katrin; Schultze, Nico; Amorim, Ricardo S. S.; Schmidt, Jürgen
The study region along the BR 16 highway belongs to the "Deforestation Arc" at the southern border of the Amazon rainforest. At the same time, it incorporates a land use gradient, as colonization started in 1975-1990 in central Mato Grosso, in 1990 in northern Mato Grosso, and most recently in 2004-2005 in southern Pará. Based on present knowledge, soil erosion is one of the key drivers of soil degradation. Hence, there is a strong need to implement soil erosion control measures in eroding landscapes. Planning and dimensioning of such measures require reliable and detailed information on the temporal and spatial distribution of soil loss, sediment transport and deposition. Soil erosion models are increasingly used in order to simulate the physical processes involved and to predict the effects of soil erosion control measures. The process-based EROSION 3D simulation model is used for surveying soil erosion and deposition in regional catchments. Although EROSION 3D is a widespread, extensively validated model, the application of the model on a regional scale remains challenging due to the enormous data requirements and complex data processing operations. In this context, the study includes the compilation, validation and generalisation of existing land use and soil data in order to generate consistent EROSION 3D input datasets. As part of this process, a GIS-linked database application allows the original soil and land use data to be transferred into model-specific parameter files. This combined methodology provides different risk assessment maps for certain demands on a regional scale. Besides soil loss and sediment transport, sediment pass-over points into surface water bodies and particle enrichment can be simulated using the EROSION 3D model. Thus the estimation of particle-bound nutrient and pollutant inputs into surface water bodies becomes possible. The study resulted in a user-friendly, time-saving and improved software package for the simulation of soil loss and
Lerner, Bao-Ting; Morelli, Michael V.; Thomas, Hans J.
This paper extends our previous research on a highly structured and compact algebraic representation of grey-level images. Addition and multiplication are defined for the set of all grey-level images, which can then be described as polynomials of two variables. Utilizing this new algebraic structure, we have devised an innovative, efficient edge detection scheme. We have developed a robust method for linear feature extraction by combining the techniques of a Hough transform and a line follower with this new edge detection scheme. The major advantage of this feature extractor is its general, object-independent nature. Target attributes, such as line segment lengths, intersections, angles of intersection, and endpoints are derived by the feature extraction algorithm and employed during model matching. The feature extractor and model matcher are being incorporated into a distributed robot control system. Model matching is accomplished using both top-down and bottom-up processing: a priori sensor and world model information are used to constrain the search of the image space for features, while extracted image information is used to update the model.
Full Text Available Universities and institutions these days deal with issues related to the assessment of large numbers of students. Various evaluation methods have been adopted by examiners in different institutions to examine the ability of an individual, ranging from manual means using paper and pencil to electronic, from oral to written, practical to theoretical and many others. There is a need to expedite the process of examination in order to meet the increasing enrolment of students at universities and institutes. The SIP Based Mass Mobile Examination System (SiBMMES) expedites the examination process by automating various activities in an examination, such as exam paper setting, scheduling and allocating examination time, and evaluation (auto-grading for objective questions), etc. SiBMMES uses the IP Multimedia Subsystem (IMS), an IP communications framework providing an environment for the rapid development of innovative and reusable services. The Session Initiation Protocol (SIP) is a signalling (request-response) protocol for this architecture; it is used for establishing sessions in an IP network, making it an ideal candidate for supporting terminal mobility in the IMS to deliver services, with the extended services available in IMS, such as open APIs, common network services, and Quality of Service (QoS) features like multiple sessions per call and Push to Talk, often requiring multiple types of media (including voice, video, pictures, and text). SiBMMES is an effective solution for mass education evaluation using mobile and web technology. In this paper, a novel hybrid component-based development (CBD) model is proposed for SiBMMES. A component-based hybrid model is selected due to the fact that IMS takes the concept of layered architecture one step further by defining a horizontal architecture where service enablers and common functions can be reused for multiple applications. This novel model tackles a new domain for IT professionals; it is ideal to start developing services as a small
We recommend that several conceptual modifications be incorporated into the state-and-transition model (STM) framework to: 1) explicitly link this framework to the concept of ecological resilience, 2) direct management attention away from thresholds and toward the maintenance of state resilience, an...
Full Text Available The principal function of a Bragg grating is filtering; it can be used in optical-fiber-based components and in active or passive semiconductor-based components, as well as in telecommunication systems. Their ideal use is with fiber lasers, fiber amplifiers or laser diodes. In this work, we show the principal results obtained during the analysis of various types of Bragg gratings by the coupled-mode method. We then present the operation of tunable DBRs. The use of Bragg gratings in a laser provides single-mode, wavelength-agile sources. The use of sampled gratings increases the tuning range.
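The coupled-mode analysis of a uniform grating rests on two standard relations: the Bragg condition, lambda_B = 2 * n_eff * Lambda, and the peak reflectivity R = tanh^2(kappa * L). A minimal numeric sketch of these textbook formulas (the parameter values below are illustrative, not taken from the paper):

```python
import math

def bragg_wavelength(n_eff, period_nm):
    """Bragg condition: lambda_B = 2 * n_eff * Lambda (result in nm)."""
    return 2.0 * n_eff * period_nm

def peak_reflectivity(kappa_per_mm, length_mm):
    """Peak reflectivity of a uniform grating: R = tanh^2(kappa * L)."""
    return math.tanh(kappa_per_mm * length_mm) ** 2

# Illustrative values for a fiber grating near the telecom C-band.
lam = bragg_wavelength(n_eff=1.447, period_nm=535.6)   # close to 1550 nm
R = peak_reflectivity(kappa_per_mm=0.8, length_mm=5.0)  # strong grating
```

Tuning a DBR, as described in the abstract, amounts to shifting n_eff (thermally or by carrier injection), which moves lambda_B proportionally.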
Richard V. Burkhauser; Butler, J. S.; Yang-Woo Kim
We use a choice-based subsample of Social Security Disability Insurance applicants from the 1978 Social Security Survey of Disability and Work to test the importance of policy variables on the timing of application for disability insurance benefits following the onset of a work-limiting health condition. We correct for choice-based sampling by extending the Manski-Lerman (1977) correction to the likelihood function of our continuous time hazard model defined with semiparametric unmeasured het...
Hoffman, Edward L.
An application for generating a property set associated with a constitutive model of a material includes a first program module adapted to receive test data associated with the material and to extract loading conditions from the test data. A material model driver is adapted to receive the loading conditions and a property set and operable in response to the loading conditions and the property set to generate a model response for the material. A numerical optimization module is adapted to receive the test data and the model response and operable in response to the test data and the model response to generate the property set.
Breckenkamp, J; Neitzke, H P; Bornkessel, C;
Applicability of a model to estimate radiofrequency electromagnetic field (RF-EMF) strength in households from mobile phone base stations was evaluated with technical data of mobile phone base stations available from the German Net Agency, and dosimetric measurements, performed in an...
TANG Xiao-mei; SUN Li
From the point of view of constructing an e-commerce application system, a combined modeling method, Business-Process Driven (BPD), is proposed based on structured analysis and object-oriented design methods. This method focuses on the business process throughout the development of the system. First, the business model of the system is built; then the commercial object model is introduced according to the business model. Finally, the COM model for the system is established. The system is implemented in an iterative and incremental way. The design and analysis results of each stage are illustrated by a series of views using the modeling tool UML.
Cornelia Gyorödi; Robert Gyorödi; Roxana Sotoc
The purpose of this paper is to present a comparative study between relational and non-relational database models in a web-based application, by executing various operations on both relational and on non-relational databases thus highlighting the results obtained during performance comparison tests. The study was based on the implementation of a web-based application for population records. For the non-relational database, we used MongoDB and for the relational database, we used MSSQL 2014. W...
Kuan Chee Houng
Full Text Available The Microsoft-based mobile sales management application is a sales force management application that currently runs on Windows Mobile 6.5. It handles sales-related activity and cuts down the administrative tasks of sales representatives. Microsoft then launched a new mobile operating system, Windows Phone, and stopped providing support for Windows Mobile. This has become an obstacle for Windows Mobile development. In time, Windows Mobile will be eliminated from the market because Microsoft no longer supports it. Besides that, Windows Mobile applications cannot run on the Windows Phone mobile operating system due to lack of compatibility. Therefore, applications that run on Windows Mobile need a solution addressing this problem. The rise of cloud computing technology in delivering software as a service provides such a solution. The Microsoft-based mobile sales management application delivers a service that runs in a web browser, rather than being limited to devices running the Windows Mobile operating system. However, there are security issues to address in order to deliver the Microsoft-based mobile application as a service in private cloud computing. Therefore, a security model is needed to answer the security issues in private cloud computing. This research proposes a security model for the Microsoft-based mobile sales management application in private cloud computing. Lastly, a User Acceptance Test (UAT) is carried out to test the compatibility between the proposed security model of the Microsoft-based mobile sales management application in a private cloud and tablet computers.
Hussain Mohammad Abu-Dalbouh
Full Text Available Healthcare professionals spend much of their time moving between patients and offices, while the supportive technology stays stationary. Therefore, mobile applications have been adapted for the healthcare industry. In spite of the advancement and variety of available mobile-based applications, there is a pressing need to investigate the current state of acceptance of those mobile health applications that are tailored towards tracking patients' conditions and sharing and accessing patient information. Consequently, in this study a Technology Acceptance Model has been designed to investigate user acceptance of mobile technology applications within the healthcare industry. The purpose of this study is to design a quantitative approach based on the Technology Acceptance Model questionnaire as its primary research methodology. It utilized a quantitative approach based on the Technology Acceptance Model (TAM) to evaluate the mobile patient-tracking system model. The related constructs for evaluation are: Perceived Usefulness, Perceived Ease of Use, User Satisfaction and Attributes of Usability. All these constructs are modified to suit the context of the study. Moreover, this study outlines the details of each construct and its relevance to the research issue. The outcome of the study represents a series of approaches that will be applied to check the suitability of a mobile patient-progress tracking application for the healthcare industry and how well it achieves the aims and objectives of the design.
Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed
Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.
Steinley, Gary; Reisetter, Marcy; Penrod, Kathryn; Haar, Jean; Ellingson, Janna
Model-Based Instruction (MBI) plays a significant role in the undergraduate teacher education program at South Dakota State University. Integrated into the program 8 years ago, the understandings and applications of MBI have evolved into a powerful and comprehensive framework that leads to rich and varied instruction with students directly in the…
This study presents the application of a set pair analysis-based similarity forecast (SPA-SF) model and wavelet denoising to forecast annual runoff. The SPA-SF model was built from identical, discrepant and contrary viewpoints. The similarity between estimated and historical data can be obtained. The weighted average of the annual runoff values characterized by the highest connection coefficients was regarded as the predicted value of the estimated annual runoff. In addition, runoff time seri...
TANGXinhuai; ZHANGYaying; YAOYinxiong; YOUJinyuan
In mobile agent systems, an application may be composed of several mobile agents that cooperatively perform a task. Multiple mobile agents need to communicate and interact with each other to accomplish their cooperative goal. A coordination model aims to provide solutions to interactions between concurrent activities, hiding the computing details and focusing on interaction between activities. A context-aware coordination model (CACM), which combines mobility and coordination, is proposed for mobile agent applications, e.g. mobile agent based information retrieval applications. The context-aware coordination model transfers interactions between agents from globally coupled interactions to locally uncoupled tuple space interactions. In addition, a programmable tuple space is adopted to solve the problems of context-aware coordination introduced by mobility and data heterogeneity in mobile agent systems. Furthermore, environment-specific and application-specific coordination policies can be integrated into the programmable tuple space for customized requirements. Finally, a sample application system, information retrieval with mobile agents, is implemented to test the performance of the proposed model.
Jiang, Wenping; Zou, Ziming
With the development of research on the space environment and space science, how to develop a networked online computing environment for space weather, space environment and space physics models for the Chinese scientific community has become more and more important in recent years. Currently, there are two software modes for a space physics multi-model application integrated system (SPMAIS): C/S and B/S. The C/S mode, which is traditional and stand-alone, demands that a team or workshop from many disciplines and specialties build its own multi-model application integrated system, and requires the client to be deployed in different physical regions when users visit the integrated system. This requirement brings two shortcomings: it reduces the efficiency of researchers who use the models to compute, and it makes accessing the data inconvenient. Therefore, it is necessary to create a shared network resource access environment which helps users quickly visit the computing resources of space physics models through a terminal, for conducting space science research and forecasting the space environment. The SPMAIS develops high-performance, first-principles computational models of the space environment in B/S mode and uses these models to predict "space weather", to understand space mission data and to further our understanding of the solar system. The main goal of the SPMAIS is to provide an easy and convenient user-driven online model operating environment. Up to now, the SPMAIS has contained dozens of space environment models, including the international AP8/AE8, IGRF and T96 models, as well as a solar proton prediction model, a geomagnetic transmission model, etc., developed by Chinese scientists. Another function of the SPMAIS is to integrate space observation data sets, which offer input data for online high-speed model computing. In this paper, the service-oriented architecture (SOA) concept that divides
The papers in this volume start with a description of the construction of reduced models through a review of Proper Orthogonal Decomposition (POD) and reduced basis models, including their mathematical foundations and some challenging applications, followed by a description of a new generation of simulation strategies based on the use of separated representations (space-parameters, space-time, space-time-parameters, space-space,…), which have led to what is known as Proper Generalized Decomposition (PGD) techniques. The models can be enriched by treating parameters as additional coordinates, leading to fast and inexpensive online calculations based on richer offline parametric solutions. Separated representations are analyzed in detail in the course, from their mathematical foundations to their most spectacular applications. It is also shown how such an approximation could evolve into a new paradigm in computational science, enabling one to circumvent various computational issues in a vast array of...
Kim, Youngho; O'Kelly, Morton
This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population at risk data to generate expected values, have resulting hotspots bounded by administrative area units and are of limited use for near-real time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population at risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance of continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, year 2000.
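The core idea, deriving a null distribution for a local cell from its own past occurrence counts via resampling, can be sketched in a few lines. This is a toy illustration of the general bootstrap significance-test principle, not the authors' implementation; the function name and counts are invented for the example:

```python
import random

def bootstrap_p_value(past_counts, current_count, n_perm=999, seed=42):
    """Toy bootstrap significance test for a local hotspot cell:
    resample past occurrence counts with replacement to build a null
    distribution, then compute a one-sided p-value for the current count."""
    rng = random.Random(seed)
    # Null distribution: resampled past counts (bootstrap permutations).
    null = [rng.choice(past_counts) for _ in range(n_perm)]
    # One-sided p-value with the usual +1 correction for permutation tests.
    exceed = sum(1 for v in null if v >= current_count)
    return (exceed + 1) / (n_perm + 1)

# A cell that historically saw 1-4 events now reports 9: flagged as a hotspot.
p = bootstrap_p_value([2, 3, 1, 4, 2, 3], current_count=9)
```

Because the expected values come from past occurrences rather than population-at-risk data, the same test applies to any continuously updated point-event registry.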
Knowledge discovery directly from data can hardly avoid being biased towards the collected experimental data, whereas expert systems are always baffled by the manual knowledge acquisition bottleneck. It is therefore believable that integrating the knowledge embedded in data with that possessed by experts can lead to a superior modeling approach. Aiming at classification problems, a novel integrated knowledge-based modeling methodology, oriented by experts and driven by data, is proposed. It starts with experts identifying modeling parameters; then the input space is partitioned and fuzzified. Afterwards, single rules are generated and aggregated to form a rule base, on which a fuzzy inference mechanism is proposed. The experts are allowed to make necessary changes to the rule base to improve the model accuracy. A real-world application, welding fault diagnosis, is presented to demonstrate the effectiveness of the methodology.
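The partition-fuzzify-generate-aggregate pipeline described above resembles the classic Wang-Mendel procedure: each training sample yields one candidate rule, and conflicting rules for the same fuzzy region are resolved by keeping the highest-degree one. A minimal one-input sketch under that assumption (the fuzzy sets and samples are invented for illustration):

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Partition of the (normalized) input space into three fuzzy sets.
SETS = {"low": (0.0, 0.0, 0.5), "mid": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}

def best_set(x):
    """Fuzzification: the set in which x has the highest membership."""
    return max(SETS, key=lambda s: tri(x, *SETS[s]))

def build_rule_base(samples):
    """samples: list of (x, class_label). One rule per sample; conflicts
    are resolved by keeping the rule with the highest membership degree."""
    rules = {}
    for x, label in samples:
        s = best_set(x)
        deg = tri(x, *SETS[s])
        if s not in rules or deg > rules[s][1]:
            rules[s] = (label, deg)
    return {s: lab for s, (lab, deg) in rules.items()}

rules = build_rule_base([(0.1, "A"), (0.9, "B"), (0.2, "A")])
```

The expert-editing step in the abstract then amounts to letting a domain expert inspect and modify the resulting rule dictionary before inference.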
NI Yong-ming; OUYANG Zhi-yun; WANG Xiao-ke
This study improved the application of the Holdridge life-zone model for simulating the distribution of desert vegetation in China, providing statistics to support eco-recovery and ecosystem reconstruction in desert areas. The study classified the desert vegetation into four types: (1) LAD: little arbor desert; (2) SD: shrub desert; (3) HLHSD: half-shrub, little half-shrub desert; (4) LHSCD: little half-shrub cushion desert. Based on the classification of Xinjiang desert vegetation, the classical Holdridge life-zone model was used to simulate the distribution of Xinjiang desert vegetation, and the resulting Kappa coefficient was compared against a standard table of accuracy represented by Kappa values. The Kappa value of the model was only 0.19, meaning the simulation result was poor. To improve the application of the life-zone model to Xinjiang desert vegetation types, a set of plot standards for terrain factors was developed, using the plot standard as the reclassification criterion for climate sub-regimes. Then the desert vegetation in Xinjiang was simulated again. The average Kappa value of the second simulation for the respective climate regimes was 0.45, and the Kappa value of the final modeling result was 0.64, a clearly better value. The modification made the model applicable to more regions. Finally, the model's ecological relevance to the Xinjiang desert vegetation types was studied.
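The Kappa values quoted above (0.19, 0.45, 0.64) measure agreement between simulated and observed vegetation classes beyond chance. As a reminder of how the statistic is computed, a minimal Cohen's kappa from a confusion matrix (standard formula, not the authors' code; the example matrix is invented):

```python
def kappa(confusion):
    """Cohen's kappa from a square confusion matrix (list of row lists):
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from the marginals."""
    n = sum(sum(row) for row in confusion)
    k = len(confusion)
    p_o = sum(confusion[i][i] for i in range(k)) / n
    p_e = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(k)
    ) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two-class example: 35 of 50 plots classified correctly.
kv = kappa([[20, 5], [10, 15]])
```

On conventional accuracy tables, values below about 0.2 indicate poor agreement and values above about 0.6 indicate good agreement, which is why the improvement from 0.19 to 0.64 matters.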
Costan, Alexandru; Stratan, Corina; Dobre, Ciprian; Leordeanu, Catalin; Cristea, Valentin
With recent increasing computational and data requirements of scientific applications, the use of large clustered systems as well as distributed resources is inevitable. Although executing large applications in these environments brings increased performance, the automation of the process becomes more and more challenging. While the use of complex workflow management systems has been a viable solution for this automation process in business oriented environments, the open source engines available for scientific applications lack some functionalities or are too difficult to use for non-specialists. In this work we propose an architectural model for a grid based workflow management platform providing features like an intuitive way to describe workflows, efficient data handling mechanisms and flexible fault tolerance support. Our integrated solution introduces a workflow engine component based on ActiveBPEL extended with additional functionalities and a scheduling component providing efficient mapping between ta...
Thyagarajan, T.; Ponnavaikko, M. [Crescent Engineering Coll., Madras (India); Shanmugam, J. [Madras Inst. of Tech. (India); Panda, R.C.; Rao, P.G. [Central Leather Research Inst., Madras (India)
This paper reviews the developments in model-based control of drying systems using Artificial Neural Networks (ANNs). A survey of current research reveals the growing interest in the application of ANNs to the modeling and control of non-linear, dynamic and time-variant systems. Over 115 articles published in this area are reviewed. All landmark papers are systematically classified in chronological order, in three distinct categories: conventional feedback controllers, model-based controllers using conventional methods, and model-based controllers using ANNs for the drying process. The principles of ANNs are presented in detail. The problems and issues of drying systems and the features of various ANN models are treated up to date. ANN-based controllers lead to smoother controller outputs, which would increase actuator life. The paper concludes with suggestions for improving the existing modeling techniques as applied to predicting the performance characteristics of dryers. The hybridization techniques presented, namely neural networks combined with fuzzy logic and genetic algorithms, provide directions for further research on the implementation of appropriate control strategies. The authors believe that the information presented here will be highly beneficial for pursuing research in the modeling and control of drying processes using ANNs. 118 refs.
Bonavoglia, M.; Casadei, G.; Zarri, L.; Mengoni, M.; Tani, A.; Serra, G.; Teodorescu, Remus
Modular multilevel converter (MMC) is an emerging multilevel topology for high-voltage applications that has been developed in recent years. In this paper, the modeling and the control of MMCs are restated in terms of space vectors, which may allow a deeper understanding of the converter behavior. As a result, a control scheme for three-phase MMCs based on the previous theoretical analysis is presented. Numerical simulations are used to test its feasibility.
The Aveston, Cooper and Kelly (ACK) method has been routinely used in estimating the efficiency of the bond between the textile and the cementitious matrix. This method, however, has limited applicability due to simplifying assumptions such as perfect bond. A numerical model for simulating the tensile behavior of reinforced cement-based composites is presented to capture the inefficiency of the bond mechanisms. In this approach the role of interface properties which are instrumental in the simu...
Highlights: • A CFD based model developed in ANSYS-FLUENT for simulating the distribution of hydrogen in the containment of a nuclear power plant during a severe accident is validated against four large-scale experiments. • The successive formation and mixing of a stratified gas-layer in experiments performed in the THAI and PANDA facilities are predicted well by the CFD model. • The pressure evolution and related condensation rate during different mixed convection flow conditions in the TOSQAN facility are predicted well by the CFD model. • The results give confidence in the general applicability of the CFD model and model settings. - Abstract: In the event of core degradation during a severe accident in water-cooled nuclear power plants (NPPs), large amounts of hydrogen are generated that may be released into the reactor containment. As the hydrogen mixes with the air in the containment, it can form a flammable mixture. Upon ignition it can damage relevant safety systems and put the integrity of the containment at risk. Despite the installation of mitigation measures, it has been recognized that the temporary existence of combustible or explosive gas clouds cannot be fully excluded during certain postulated accident scenarios. The distribution of hydrogen in the containment and mitigation of the risk are, therefore, important safety issues for NPPs. Complementary to lumped parameter code modelling, Computational Fluid Dynamics (CFD) modelling is needed for the detailed assessment of the hydrogen risk in the containment and for the optimal design of hydrogen mitigation systems in order to reduce this risk as far as possible. The CFD model applied by NRG makes use of the well-developed basic features of the commercial CFD package ANSYS-FLUENT. This general purpose CFD package is complemented with specific user-defined sub-models required to capture the relevant thermal-hydraulic phenomena in the containment during a severe accident as well as the effect of
This paper aims at constructing an emission source inversion model using a variational processing method and an adaptive nudging scheme for the Community Multiscale Air Quality Model (CMAQ) based on satellite data, to investigate the applicability of high-resolution OMI (Ozone Monitoring Instrument) column concentration data for air quality forecasts over North China. The results show a reasonable consistency and good correlation between the spatial distributions of NO2 from surface and OMI satellite measurements in both winter and summer. Such OMI products may be used to implement integrated variational analysis based on ground observation data. With linear and variational corrections made, the spatial distribution of OMI NO2 clearly revealed more localized distribution characteristics of NO2 concentration. With such information, emission sources in the southwest and southeast of North China are found to have greater impacts on air quality in Beijing. When the retrieved emission source inventory based on high-resolution OMI NO2 data was used, the coupled Weather Research and Forecasting CMAQ model (WRF-CMAQ) performed significantly better in forecasting the NO2 concentration level and its tendency, as reflected by the greater consistency between surface-observed and modeled NO2 concentrations. In conclusion, satellite data are particularly important for simulating NO2 concentrations on urban and street-block scales. High-resolution OMI NO2 data are applicable for inverting the NOx emission source inventory, assessing the regional pollution status and pollution control strategy, and improving model forecasting results on the urban scale.
You, Guohua; Zhao, Ying
More and more web servers adopt multi-core CPUs to improve performance as multi-core technology develops. However, web applications cannot exploit the potential of a multi-core web server efficiently because of the traditional request-processing algorithms and thread-scheduling strategies in the OS. In this paper, a new web-based application optimization model is proposed, which classifies and schedules dynamic and static requests on a scheduling core and processes the dynamic requests on the other cores. Based on this model, a simulation program called SIM was developed. Experiments have been done to validate the new model, and the results show that it can effectively improve the performance of multi-core web servers and avoid ping-pong effects.
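A toy sketch of one plausible reading of this dispatch model: a designated scheduling core classifies incoming requests, answers static ones itself, and hands dynamic ones to per-core worker queues. The extension list, function names, and round-robin policy are illustrative assumptions, not details from the paper:

```python
from queue import Queue

# Extensions treated as static content (illustrative list).
STATIC_EXT = (".html", ".css", ".js", ".png", ".jpg")

def classify(path):
    """Scheduling-core classification step: static vs. dynamic request."""
    return "static" if path.endswith(STATIC_EXT) else "dynamic"

def dispatch(requests, n_worker_cores):
    """Serve static requests on the scheduling core; distribute dynamic
    requests round-robin to queues bound to the remaining cores."""
    workers = [Queue() for _ in range(n_worker_cores)]
    static_served, d = [], 0
    for path in requests:
        if classify(path) == "static":
            static_served.append(path)             # handled locally
        else:
            workers[d % n_worker_cores].put(path)  # sent to a worker core
            d += 1
    return static_served, workers

served, workers = dispatch(["/a.html", "/cgi/q", "/b.css", "/api/x"], 2)
```

Pinning each worker queue to a fixed core is what avoids the ping-pong effect the abstract mentions: a request's working set stays in one core's cache instead of migrating between cores.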
Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio
The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed. PMID:26054728
GAO Xi-feng; LIU Run; YAN Shu-wang
The buckling of submarine pipelines may occur due to the axial soil frictional force caused by relative movement of soil and pipeline, which is induced by thermal and internal pressure. The likelihood of occurrence of this buckling phenomenon is largely determined by soil resistance. A series of large-scale model tests were carried out to establish a substantial database for a variety of buried pipeline relationships. Based on the test data, nonlinear soil springs can be adopted to simulate the soil behavior during pipeline movement. For uplift resistance, an ideal elastic-plastic model is recommended in the case of H/D (depth-to-diameter ratio) > 5, and an elastic softening model is recommended in the case of H/D ≤ 5. The soil resistance along the pipeline axial direction can be simulated by an ideal elastic-plastic model. The numerical analysis results show that the capacity of the pipeline against thermal buckling decreases as its initial imperfection enlarges and increases with burial depth.
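The two spring idealizations named in the abstract, ideal elastic-plastic and elastic softening, can be sketched as a single force-displacement function. The stiffness and softening parameters below are illustrative placeholders, not values from the tests:

```python
def soil_spring_force(disp, k, f_ult, softening=None):
    """Nonlinear soil-spring resistance vs. displacement.

    Without `softening`: ideal elastic-plastic spring, i.e. linear
    stiffness k capped at the ultimate resistance f_ult (H/D > 5 case).
    With `softening=(d_peak, f_res, d_res)`: elastic softening spring,
    i.e. resistance decays linearly from f_ult (reached at d_peak) to a
    residual f_res at d_res (H/D <= 5 case)."""
    f = min(k * disp, f_ult)                     # elastic branch, capped
    if softening is not None:
        d_peak, f_res, d_res = softening
        if disp > d_peak:
            frac = min((disp - d_peak) / (d_res - d_peak), 1.0)
            f = f_ult - frac * (f_ult - f_res)   # post-peak softening
    return f

# Elastic-plastic spring: fully mobilized beyond f_ult / k.
f_plastic = soil_spring_force(0.01, k=1000.0, f_ult=5.0)
# Softening spring: resistance has decayed to the residual value.
f_soft = soil_spring_force(0.02, k=1000.0, f_ult=5.0,
                           softening=(0.005, 2.0, 0.02))
```

In a finite-element buckling analysis, such functions would define the force-displacement curves of the discrete springs attached along the pipeline.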
This thesis deals with synthesis of combined (nonlinear) model-based and (fuzzy logic) rule-based controllers, along with their applications to helicopter flight control problem. The synthesis involves superimposing two control techniques in order to meet both stability and performance objectives. One is model-based control technique, which is based on inversion of an approximate model of the real system. The other is rule-based control technique that adaptively cancels the inversion errors caused by the approximate model inversion. There are two major aspects of the research effort in this thesis. The first is the development of the adaptive rule-based (fuzzy logic) controllers. The linguistic rule weights and defuzzification output weights in the controllers are adapted for ultimate boundedness of the tracking errors. Numerical results from a helicopter flight control problem indicate improvement and demonstrate effectiveness of the control technique. The second aspect of this research work is the extension of the synthesis to account for control limits. In this thesis, a control saturation related rule-bank in conjunction with the adaptive fuzzy logic controller is designed to trade-off system performance for closed-loop stability when the tendency towards control amplitude and/or rate saturation is detected. Simulation results from both a fixed-wing aircraft trajectory control problem and a helicopter flight control problem show the effectiveness of the synthesis method and the resulting controller in avoiding control saturations.
Highlights: ► The study focuses on elliptic blending near-wall models. ► Models are compared on 2- and 3-dimensional separating flows. ► Conclusions are ambiguous for 2-d flows. ► The predictive superiority of Reynolds stress models over eddy viscosity models appears in 3-d flows. - Abstract: This paper considers the application of four Reynolds-Averaged Navier-Stokes (RANS) models to a range of progressively complex test cases exhibiting both 2-d and 3-d flow separation. Two Eddy Viscosity Models (EVM) and two Reynolds Stress Transport Models (RSM) are employed, of which two (one in each category) are based on elliptic blending formulations. Both by reviewing the conclusions of previous studies and from the present calculations, this study aims to gain more insight into the importance of two modelling features for these flows: the use of turbulence anisotropy resolving schemes, and the near-wall limiting behaviour. In general the anisotropy and near-wall treatment offered by both elliptic blending models is observed to offer some improvement over the other models tested, although this is not always the case for the 2-d flows, where (as ever) a single “best candidate” model does not emerge.
Marine information has been increasing quickly. Traditional database technologies have disadvantages in manipulating large amounts of marine information, which relates to position in 3-D and time. Recently, greater emphasis has been placed on GIS (geographical information systems) to deal with marine information. GIS has shown great success in terrestrial applications over recent decades, but its use in marine fields has been far more restricted. One of the main reasons is that most GIS systems and their data models are designed for land applications; they cannot cope well with the nature of the marine environment and marine information, and this poses a fundamental challenge to traditional GIS and its data structures. This work designed a data model, the raster-based spatio-temporal hierarchical data model (RSHDM), for marine information systems and for knowledge discovery from spatio-temporal data, which bases itself on the nature of marine data and overcomes the shortcomings of current spatio-temporal models when they are used in this field. As an experiment, a marine fishery data warehouse (FDW) for marine fishery management was set up based on the RSHDM. The experiment proved that the RSHDM handles the data well and can easily extract the aggregations that management needs at different levels.
For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various plant-program applications as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers plant-specific data such as component failure history, maintenance history, and shift diaries. First, for the application of the developed algorithm, this study re-established the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The algorithm is validated by comparing the Fussell-Vesely importance measure value from a direct mathematical calculation with that obtained from the algorithm. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. The proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database offering everything from basic plant-related data to final customized data.
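The Fussell-Vesely importance measure used in the validation can be computed from minimal cut sets under the usual rare-event approximation. The sketch below is generic; the cut sets and failure probabilities in the example are made up for illustration, not taken from the study.

```python
def fussell_vesely(cut_sets, q, comp):
    """Rare-event approximation of the Fussell-Vesely importance of comp:
    the fraction of total risk contributed by minimal cut sets containing comp.
    cut_sets: list of sets of component names.
    q: dict mapping component name -> failure probability."""
    def cut_prob(cs):
        p = 1.0
        for c in cs:
            p *= q[c]
        return p

    total = sum(cut_prob(cs) for cs in cut_sets)
    with_comp = sum(cut_prob(cs) for cs in cut_sets if comp in cs)
    return with_comp / total
```

With hypothetical cut sets {PUMP} and {VALVE, SENSOR} and q = 0.01, 0.1, 0.1 respectively, both the pump and the valve receive an importance of 0.5, since each cut set contributes equally to the total risk.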
史宏彦; 谢定义; 白琳
The stress vector-based constitutive model for cohesionless soil, proposed by SHI Hong-yan et al., was applied to analyze the deformation behaviors of materials subjected to various stress paths. The analysis shows that the constitutive model can capture well the main deformation behaviors of cohesionless soil, such as stress-strain nonlinearity, hardening, dilatancy, stress path dependency, non-coaxiality between the principal stress and principal strain increment directions, and the coupling of mean effective and deviatoric stress with deformation. In addition, the model can simultaneously take into account the rotation of principal stress axes and the influence of intermediate principal stress on the deformation and strength of the soil. The excellent agreement between the predicted and measured behavior indicates the comprehensive applicability of the model.
Shoukri Mohamed M
Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls, and hence the application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula-based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of the random variables is applied to construct Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We carried out a correlation-based regression analysis on data from 20 patients aged 17-82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). Exploratory data analysis showed that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. The measurements also tend to be widely spread and have shorter tails compared to a normal distribution. Therefore predictions made from the correlation-based model for pre-operative ejection fraction measurements in the lower range may not be accurate. Further, the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula-based prediction model is estimated as: Post-operative ejection fraction = - 0.0933 + 0
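The reported correlation-based model is a single linear equation and can be applied directly. A minimal sketch, using only the coefficients quoted in the abstract (the input value in the example is arbitrary):

```python
def predict_post_op_ef(pre_op_ef):
    """Correlation-based regression model quoted in the abstract:
    predicted post-operative ejection fraction from the pre-operative one."""
    return -0.0658 + 0.8403 * pre_op_ef
```

The abstract's point is that, because both measurements are left-skewed and better approximated by gamma marginals, a copula-based model is preferable to this straight line for pre-operative values in the lower range.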
Marino, Dale J
Abstract Physiologically based pharmacokinetic (PBPK) models are mathematical descriptions depicting the relationship between external exposure and internal dose. These models have found great utility for interspecies extrapolation. However, specialized computer software packages, which are not widely distributed, have typically been used for model development and utilization. A few physiological models have been reported using more widely available software packages (e.g., Microsoft Excel), but these tend to include less complex processes and dose metrics. To ascertain the capability of Microsoft Excel and Visual Basic for Applications (VBA) for PBPK modeling, models for styrene, vinyl chloride, and methylene chloride were coded in Advanced Continuous Simulation Language (ACSL), Excel, and VBA, and simulation results were compared. For styrene, differences between ACSL and Excel or VBA compartment concentrations and rates of change were less than +/-7.5E-10 when the same numerical integration technique and time step were used. Differences were larger when the VBA fixed-step or ACSL Gear's methods were used, but Excel and VBA PBPK model dose metrics differed by no more than -0.013% or -0.23%, respectively, from ACSL results. These differences are likely attributable to different step sizes rather than different numerical integration techniques. These results indicate that Microsoft Excel and VBA can be useful tools for utilizing PBPK models, and given the availability of these software programs, it is hoped that this effort will help facilitate the use and investigation of PBPK modeling. PMID:20021074
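The attribution of the remaining differences to step size rather than integration method can be illustrated with a toy one-compartment elimination model (not the styrene, vinyl chloride, or methylene chloride models themselves; the rate constant and initial concentration are arbitrary):

```python
import math

def euler_concentration(c0, k_el, dt, t_end):
    """Fixed-step Euler integration of dC/dt = -k_el * C, the kind of
    scheme a spreadsheet PBPK implementation typically uses."""
    c = c0
    for _ in range(round(t_end / dt)):
        c += dt * (-k_el * c)
    return c

# Shrinking the step size narrows the gap to the analytic solution,
# mirroring the step-size-dependent differences reported above.
exact = 1.0 * math.exp(-0.5 * 10.0)
err_coarse = abs(euler_concentration(1.0, 0.5, 0.1, 10.0) - exact)
err_fine = abs(euler_concentration(1.0, 0.5, 0.001, 10.0) - exact)
```

Two implementations using the same method but different fixed steps will disagree by roughly the difference of these discretization errors, even when each is individually coded correctly.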
Kafatos, M.; Boybeyi, Z.; Cervone, G.; di, L.; Sun, D.; Yang, C.; Yang, R.
The ever-growing large volumes of Earth system science data, collected by Earth observing platforms and in situ stations and as model output data, are increasingly being used by discipline scientists and by wider classes of users. In particular, applications of Earth system science data to environmental and hazard applications, as well as other national applications, require tailored or specialized data, along with web-based tools and infrastructure. The latter are driven by applications and usage drivers which include ease of access, visualization of complex data, ease of producing value-added data, GIS and open source analysis usage, metadata, etc. Here we present different aspects of such web-based services and access, and discuss several applications in the hazards and environmental areas, including earthquake signatures and observations and model runs of hurricanes. Examples and lessons learned from the Mid-Atlantic Geospatial Information Consortium will be presented. We discuss a NASA-funded, open source on-line data analysis system that is being applied to climate studies for the ESIP Federation. Since being enhanced, this project and the next-generation Metadata Integrated Data Analysis System allow users not only to identify data but also to generate new data products on the fly. The functionalities extend from limited predefined functions to sophisticated functions described by general-purpose GrADS (Grid Analysis and Display System) commands. The Federation system also allows third-party data products to be combined with local data. Software components are available for converting the output from MIDAS (OPeNDAP) into OGC-compatible software. The ongoing Grid efforts at CEOSR and LAITS in the School of Computational Sciences (SCS) include enhancing the functions of Globus to provide support for a geospatial system so that the system can share computing power to handle problems with different peak access times and improve the stability and flexibility of a rapid
A generalized theoretical analysis of the amplification mechanism in the planar-type Cherenkov laser is given. An electron is represented as a matter wave having temporally and spatially varying phases with a finite spreading length. The interaction between the electrons and the electromagnetic (EM) wave is analyzed by taking quantum statistical properties into account. The interaction mechanism is classified into the Velocity and Density Modulation (VDM) model and the Energy Level Transition (ELT) model, based on the relation between the wavelength of the EM wave and the electron spreading length. The VDM model is applicable when the wavelength of the EM wave is longer than the electron spreading length, as in the microwave region. The dynamic equation of the electron, which is popularly used in classical Newtonian mechanics, has been derived from the quantum mechanical Schrödinger equation. The amplification of the EM wave can be explained by the bunching effect of the electron density in the electron beam. The amplification gain and its dispersion relation with respect to the electron velocity are given in this paper. On the other hand, the ELT model is applicable when the wavelength of the EM wave is shorter than the electron spreading length, as in the optical region. The dynamics of the electron are explained by electron transitions between different energy levels. The amplification gain and its dispersion relation with respect to the electron acceleration voltage were derived on the basis of the quantum mechanical density matrix
Carvalho Resende, T.; Balan, T.; Abed-Meraim, F.; Bouvier, S.; Sablin, S.-S.
With a view to environmental, economic and safety concerns, car manufacturers need to design lighter and safer vehicles in ever shorter development times. In recent years, High Strength Steels (HSS) like Interstitial Free (IF) steels, which have higher ratios of yield strength to elastic modulus, are increasingly used for sheet metal parts in the automotive industry to meet these demands. Moreover, the application of sheet metal forming simulations has proven beneficial for reducing tool costs in the design stage and optimizing current processes. The Finite Element Method (FEM) is quite successful at simulating metal forming processes, but accuracy largely depends on the quality of the material properties provided as input to the material model. Common phenomenological models roughly consist in fitting functions to experimental results and do not provide any predictive character for different metals of the same grade. Therefore, the use of accurate plasticity models based on physics would increase predictive capability, reduce parameter identification cost and allow for robust and time-effective finite element simulations. For this purpose, a 3D physically based model at large strain with a dislocation density evolution approach was presented in IDDRG2009 by the authors. This model allows the description of work-hardening behavior for different loading paths (i.e. uni-axial tensile, simple shear and Bauschinger tests) taking into account several data from the microstructure (i.e. grain size, texture, etc.). The originality of this model consists in the introduction of microstructure data into a classical phenomenological model in order to achieve a predictive character of work-hardening for different metals of the same grade. Indeed, thanks to a microstructure parameter set for an Interstitial Free steel, it is possible to describe the work-hardening behavior for different loading paths of other IF steels by only changing the mean grain size and the chemical
Jech, Markus; Sharma, Prateek; Tyaginov, Stanislav; Rudolf, Florian; Grasser, Tibor
We study the limits of the applicability of a drift-diffusion (DD) based model for hot-carrier degradation (HCD). In this approach the rigorous but computationally expensive solution of the Boltzmann transport equation is replaced by an analytic expression for the carrier energy distribution function. On the one hand, we already showed that the simplified version of our HCD model is quite successful for LDMOS devices. On the other hand, hot carrier degradation models based on the drift-diffusion and energy transport schemes were shown to fail for planar MOSFETs with gate lengths of 0.5-2.0 µm. To investigate the limits of validity of the DD-based HCD model, we use planar nMOSFETs of an identical topology but with different gate lengths of 2.0, 1.5, and 1.0 µm. We show that, although the model is able to adequately represent the linear and saturation drain current changes in the 2.0 µm transistor, it starts to fail for gate lengths shorter than 1.5 µm and becomes completely inadequate for the 1.0 µm device.
Recently, Artificial Neural Networks (ANN), which are mathematical modeling tools inspired by the properties of the biological neural system, have typically been used in studies of hydrological time series modeling. These modeling studies generally include standard feed forward backpropagation (FFBP) algorithms such as gradient descent, gradient descent with momentum rate, conjugate gradient, etc. As the standard FFBP algorithms have some disadvantages relating to time requirements and slow convergence in training, Newton and Levenberg-Marquardt algorithms, which are alternative approaches to standard FFBP algorithms, have been improved and also used in applications. In this study, an application of a Levenberg-Marquardt algorithm based ANN (LM-ANN) for the modeling of monthly inflows of the Demirkopru Dam, which is located in the Gediz basin, is presented. The LM-ANN results were also compared with a gradient descent with momentum rate algorithm based FFBP model (GDM-ANN). When the statistics of the long-term and seasonal outputs are compared, it can be seen that the LM-ANN model that has been developed is more sensitive for prediction of the inflows. In addition, the LM-ANN approach can be used for modeling other hydrological components in terms of rapid assessment and robustness.
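The Levenberg-Marquardt step that distinguishes LM-ANN from plain gradient descent can be sketched for a single-parameter least-squares fit. This is a toy scalar example under the standard LM update rule, not the thesis's network training code:

```python
def lm_fit_slope(xs, ys, a0=0.0, mu=1e-3, iters=60):
    """Levenberg-Marquardt fit of y ~ a*x.
    Each step solves (J^T J + mu) * delta = -J^T r with residuals r = y - a*x,
    shrinking the damping mu after a successful step and growing it otherwise."""
    a = a0

    def sse(a_val):
        return sum((y - a_val * x) ** 2 for x, y in zip(xs, ys))

    for _ in range(iters):
        g = sum(-x * (y - a * x) for x, y in zip(xs, ys))  # J^T r (J = -x)
        h = sum(x * x for x in xs) + mu                    # J^T J + damping
        a_trial = a - g / h
        if sse(a_trial) < sse(a):
            a, mu = a_trial, mu * 0.5   # accept: behave more like Gauss-Newton
        else:
            mu *= 2.0                   # reject: behave more like gradient descent
    return a
```

The adaptive damping is what gives LM its fast convergence near a minimum (Gauss-Newton behavior) while remaining stable far from it (gradient-descent behavior), the property motivating its use over standard FFBP training.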
Diggle, Peter J
Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics with an emphasis on statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.
Gaurav Singh Thakur; Anubhav Gupta; Ankur Bhardwaj; Biju R Mohan
This paper uses a case based study – “product sales estimation” on real-time data to help us understand the applicability of linear and non-linear models in machine learning and data mining. A systematic approach has been used here to address the given problem statement of sales estimation for a particular set of products in multiple categories by applying both linear and non-linear machine learning techniques on a data set of selected features from the original data set. Feature ...
MA Yuping; WANG Shili; ZHANG Li; HOU Yingyu; ZHUANG Liwei; WANG Futang
Accurate crop growth monitoring and yield forecasting are significant to food security and the sustainable development of agriculture. Crop yield estimation by remote sensing and crop growth simulation models has high potential application in crop growth monitoring and yield forecasting. However, both have limitations, in mechanism and in regional application, respectively. Therefore, approaches and methodologies for combining remote sensing data and crop growth simulation models are of concern to many researchers. In this paper, WOFOST (World Food Study), adjusted and regionalized for North China, and SAIL-PROSPECT (Scattering by Arbitrarily Inclined Leaves coupled with a model of leaf optical PROperties SPECTra) were coupled through LAI to simulate the Soil Adjusted Vegetation Index (SAVI) of the crop canopy, by which the crop model was re-initialized by minimizing differences between simulated SAVI and SAVI synthesized from remote sensing data using an optimization software package (FSEOPT). Thus, a regional remote-sensing crop-simulation-framework model (WSPFRS) was established under the potential production level (optimal soil water condition). The results were as follows: after re-initializing the regional emergence date using remote sensing data, the anthesis and maturity dates simulated by the WSPFRS model were closer to measured values than the simulated results of WOFOST; by re-initializing the regional biomass weight at the turn-green stage, the spatial distribution of simulated storage organ weight was more consistent with measured yields, and the area with high values was nearly consistent with the actual high yield area. This research is a basis for developing regional crop models under water-stressed production levels based on remote sensing data.
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
Existing global and continental scale river models, mainly designed for integration with global climate models, are of very coarse spatial resolution and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in BoM for producing various fluxes and stores for national water accounting. This paper introduces this newly developed river system model
This article develops a novel approach and algorithmic tools for the modeling and survivability analysis of networks with heterogeneous nodes, and examines their application to space-based networks. Space-based networks (SBNs) allow the sharing of spacecraft on-orbit resources, such as data storage, processing, and downlink. Each spacecraft in the network can have a different subsystem composition and functionality, thus resulting in node heterogeneity. Most traditional survivability analyses of networks assume node homogeneity and, as a result, are not suited to the analysis of SBNs. This work proposes that heterogeneous networks can be modeled as interdependent multi-layer networks, which enables their survivability analysis. The multi-layer aspect captures the breakdown of the network according to common functionalities across the different nodes and allows the emergence of homogeneous sub-networks, while the interdependency aspect constrains the network to capture the physical characteristics of each node. Definitions of primitives of failure propagation are devised. Formal characterization of interdependent multi-layer networks, as well as algorithmic tools for the analysis of failure propagation across the network, are developed and illustrated with space applications. The SBN applications considered consist of several networked spacecraft that can tap into each other's Command and Data Handling subsystem, in case of failure of their own, including the Telemetry, Tracking and Command, the Control Processor, and the Data Handling sub-subsystems. Various design insights are derived and discussed, and the capability to perform trade-space analysis with the proposed approach for various network characteristics is indicated. The selected results shown here quantify the incremental survivability gains (with respect to a particular class of threats) of the SBN over the traditional monolith spacecraft. Failure of the connectivity between nodes is also
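The failure-propagation fixpoint at the heart of such a survivability analysis can be sketched as follows. This is a simplified single-subsystem abstraction with hypothetical node names; the paper's model distinguishes several C&DH sub-subsystems and layers.

```python
def functional_nodes(nodes, own_failed, providers):
    """A spacecraft whose own subsystem has failed remains functional if it
    can tap the subsystem of at least one functional provider.
    nodes: all node names; own_failed: nodes whose own subsystem failed;
    providers: dict mapping a node to the set of nodes it can tap into.
    Iterates to a fixpoint, since recoveries can cascade."""
    functional = set(nodes) - set(own_failed)
    changed = True
    while changed:
        changed = False
        for n in own_failed:
            if n not in functional and providers.get(n, set()) & functional:
                functional.add(n)
                changed = True
    return functional
```

The survivability gain over a monolith spacecraft shows up as the cases where a node with a failed subsystem remains functional through the network; mutually dependent failed nodes, by contrast, stay down.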
Land use changes, urbanisation and infrastructure developments in particular, cause fragmentation of natural habitats and threaten biodiversity. Tools and measures must be adapted to assess and remedy the potential effects on biodiversity caused by human activities and developments. Within physical planning, environmental impact assessment (EIA) and strategic environmental assessment (SEA) play important roles in the prediction and assessment of biodiversity-related impacts from planned developments. However, adapted prediction tools to forecast and quantify potential impacts on biodiversity components are lacking. This study tested and compared four different GIS-based habitat models and assessed their relevance for applications in environmental assessment. The models were implemented in the Stockholm region in central Sweden and applied to data on the crested tit (Parus cristatus), a sedentary bird species of coniferous forest. All four models performed well and allowed the distribution of suitable habitats for the crested tit in the Stockholm region to be predicted. The models were also used to predict and quantify habitat loss for two regional development scenarios. The study highlighted the importance of model selection in impact prediction. Criteria that are relevant for the choice of model for predicting impacts on biodiversity were identified and discussed. Finally, the importance of environmental assessment for the preservation of biodiversity within the general frame of biodiversity conservation is emphasised.
Khalili, N; Naguib, H E; Kwon, R H
Human intervention can be replaced through the development of tools based on sensing devices with a wide range of applications, including humanoid robots and remote, minimally invasive surgery. Similar to the five human senses, sensors interface with their surroundings to stimulate a suitable response or action. The sense of touch, which arises in human skin, is among the most challenging senses to emulate due to its ultra-high sensitivity, and this has brought forth novel challenges in the field of biomimetic robotics. In this work, using a multiphase reaction, a polypyrrole (PPy) based hydrogel is developed as a resistive-type pressure sensor with an intrinsically elastic microstructure stemming from three-dimensional hollow spheres. It is shown that the electrical conductivity of the fabricated PPy-based piezoresistive sensors is enhanced by adding conductive fillers, endowing the sensors with higher sensitivity. A semi-analytical constriction-resistance-based model is developed that accounts for the real contact area between the PPy hydrogel sensors and the electrode, along with the dependency of the contact resistance change on the applied load. The model is then solved using a Monte Carlo technique and its corresponding sensitivity is obtained. Compared with their experimental counterparts, the proposed modeling methodology offers good tracking ability. PMID:27035514
Reliable reactor control is important to reactor safety, in both terrestrial and space systems. For a space system, where the communication time to Earth is significant, autonomous control is imperative. Based on feedback from reactor diagnostics, a controller must be able to automatically adjust to changes in reactor temperature and power level to maintain nominal operation without user intervention. Model-based predictive control (MBPC) (Clarke 1994; Morari 1994) is investigated as a potential control methodology for reactor start-up and transient operation in the presence of an external source. Bragg-Sitton and Holloway (2004) assessed the applicability of MBPC to reactor start-up from a cold, zero-power condition in the presence of a time-varying external radiation source, where large fluctuations in the external radiation source can significantly impact a reactor during start-up operations. The MBPC algorithm applied the point kinetics model to describe the reactor dynamics, using a single group of delayed neutrons; the initial application considered a neutron lifetime of 10^-3 s to simplify calculations during initial controller analysis. The present study specifies the dynamics of a fast reactor more accurately, using a more appropriate fast neutron lifetime (10^-7 s) than in the previous work. Controller stability will also be assessed by carefully considering the dependencies of each component in the defined cost (objective) function and their subsequent effect on the selected 'optimal' control maneuvers
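The point kinetics model with one delayed-neutron group, which the MBPC algorithm uses as its internal plant model, can be written as a minimal explicit-Euler step. This is a sketch of the standard equations; the parameter values in the equilibrium check below are illustrative, not the study's reactor data.

```python
def point_kinetics_step(n, c, rho, beta, gen_time, decay, source, dt):
    """One explicit Euler step of the one-delayed-group point kinetics equations:
       dn/dt = ((rho - beta) / Lambda) * n + lambda * C + S
       dC/dt = (beta / Lambda) * n - lambda * C
    n: neutron density, c: delayed-neutron precursor concentration,
    rho: reactivity, beta: delayed-neutron fraction, gen_time: neutron
    generation time Lambda, decay: precursor decay constant lambda,
    source: external source strength S."""
    dn = (rho - beta) / gen_time * n + decay * c + source
    dc = beta / gen_time * n - decay * c
    return n + dt * dn, c + dt * dc
```

With a generation time of 10^-7 s the prompt term makes this system extremely stiff, which is exactly why controller stability must be reassessed when moving from the earlier 10^-3 s lifetime to the fast-reactor value.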
Power system reliability has been a widespread concern. With the continuous development of the electricity market, a serious conflict has emerged between social obligations and economic requirements: as a social obligation, the power industry must ensure reliable power system operation, while as an economic requirement it must address the economics of power market operation. Power system operation reliability therefore faces a serious challenge, and a new mechanism adapted to the electricity market must be established to ensure it. In this paper, we study models and methods based on the principal-agent mechanism from three aspects: the starting point of the reliability principal-agent model, the analysis and design of the principal-agent model, and the impact of the level of information transparency. On this basis, we propose a reliability management model based on the principal-agent mechanism and carry out an example analysis. The example analysis shows that by establishing a principal-agent relationship between power enterprises and users, the reliability resources of the entire power system can be configured more efficiently, further enhancing society-wide electricity reliability benefits. This provides further evidence that the principal-agent mechanism is scientific and applicable in the study of power system reliability.
Time series prediction has been successfully used in several application areas, such as meteorological forecasting, market prediction, network traffic forecasting, etc., and a number of techniques have been developed for modeling and predicting time series. In the traditional exponential smoothing method, a fixed weight is assigned to data history, and the trend changes of time series are ignored. In this paper, an uncertainty reasoning method, based on cloud model, is employed in time series prediction, which uses cloud logic controller to adjust the smoothing coefficient of the simple exponential smoothing method dynamically to fit the current trend of the time series. The validity of this solution was proved by experiments on various data sets.
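The core idea, making the smoothing coefficient follow the current trend instead of fixing it, can be sketched without the cloud logic controller. The sketch below replaces the cloud model with a simple relative-change heuristic, so it is an illustrative stand-in rather than the paper's method:

```python
def adaptive_exp_smoothing(series, alpha=0.5, lo=0.1, hi=0.9):
    """Simple exponential smoothing whose coefficient alpha is raised when
    the series is trending strongly (trust recent data more) and lowered
    when it is flat (trust the smoothed history more)."""
    s = series[0]
    for i in range(1, len(series)):
        if i >= 2:
            trend = abs(series[i - 1] - series[i - 2])
            scale = abs(series[i - 1]) + 1e-9  # avoid division by zero
            alpha = min(hi, max(lo, trend / scale))
        s = alpha * series[i] + (1 - alpha) * s
    return s
```

In the paper's approach, the mapping from observed trend to alpha is performed by a cloud logic controller, which handles the uncertainty in "strong" versus "weak" trends; the clamp to [lo, hi] here is a crude substitute for that reasoning step.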
Global change study is an interdisciplinary and comprehensive research activity with international cooperation that arose in the 1980s and has the largest scope. The interaction of land use and cover change (LUCC), as a research field crossing natural and social science, has become one of the core subjects of global change study as well as its front edge and hot spot. It is necessary to study land use and cover change in the urbanization process and to build an analogue model of urbanization in order to describe, simulate and analyze the dynamic behaviors of urban development change and to understand the basic characteristics and rules of the urbanization process. This has positive practical and theoretical significance for formulating urban and regional sustainable development strategies. The effect of urbanization on land use and cover change is mainly embodied in changes to the quantity structure and spatial structure of urban space, and LUCC modeling of the urbanization process has become an important research subject of urban geography and urban planning. In this paper, building on previous research achievements, the author systematically analyzes research on land use/cover change in the urbanization process using the theories of complexity science and intelligent computation; builds a model for simulating and forecasting the dynamic evolution of urban land use and cover change, on the basis of the cellular automata model of complexity science and multi-agent theory; extends the Markov model, the traditional CA model and the agent model; and introduces complexity science and intelligent computation theory into the LUCC research model to build an intelligent-computation-based LUCC model for analogue research on land use and cover change in urbanization research, with case studies. The concrete contents are as follows: 1. Complexity of LUCC research in the urbanization process. Analyze the urbanization process in combination with the contents
Le, A.; Pricope, N. G.
Projections indicate that increasing population density, food production, and urbanization in conjunction with changing climate conditions will place stress on water resource availability. As a result, a holistic understanding of current and future water resource distribution is necessary for creating strategies to identify the most sustainable means of accessing this resource. Currently, most water resource management strategies rely on the application of global climate predictions to physically based hydrologic models to understand potential changes in water availability. However, the need to focus on understanding community-level social behaviors that determine individual water usage is becoming increasingly evident, as predictions derived only from hydrologic models cannot accurately represent the coevolution of basin hydrology and human water and land usage. Models that better represent the complexity and heterogeneity of human systems, and the use of satellite-derived products in place of or in conjunction with historic data, significantly improve preexisting hydrologic model accuracy and application outcomes. We used a novel agent-based sociotechnical model that combines the Soil and Water Assessment Tool (SWAT) and Agent Analyst and applied it in the Nzoia Basin, an area in western Kenya that is becoming rapidly urbanized and industrialized. Informed by a combination of satellite-derived products and over 150 household surveys, the combined sociotechnical model provided unique insight into how populations self-organize and make decisions based on water availability. In addition, the model depicted how population organization and current management alter water availability now and in the future.
Gaurav Singh Thakur
This paper uses a case-based study, "product sales estimation" on real-time data, to help us understand the applicability of linear and non-linear models in machine learning and data mining. A systematic approach is used to address the given problem statement of sales estimation for a particular set of products in multiple categories, applying both linear and non-linear machine learning techniques to a data set of selected features from the original data set. Feature selection is a process that reduces the dimensionality of the data set by excluding those features that contribute minimally to the prediction of the dependent variable. The next step is training the model, which is done using multiple techniques from the linear and non-linear domains, among the best in their respective areas. Data remodeling is then done to extract new features from the data set by changing its structure, and the performance of the models is checked again. Data remodeling often plays a crucial role in boosting classifier accuracy by changing the properties of the given dataset. We then explore and analyze the various reasons why one model performs better than the other, and thereby develop an understanding of the applicability of linear and non-linear machine learning models. Beyond this primary goal, we also aim to find the classifier with the best possible accuracy for product sales estimation in the given scenario.
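The linear-versus-non-linear comparison at the heart of this study can be reproduced on toy data. The sketch below is not the paper's pipeline: it uses an invented curved "sales" relationship, an ordinary least-squares line as the linear model, and a simple k-nearest-neighbour regressor as the non-linear one, just to show why the non-linear model wins when the underlying relationship is curved.

```python
# Linear vs non-linear fit on a toy "sales" series: least squares against a
# k-nearest-neighbour regressor. Data and models are illustrative stand-ins
# for the techniques compared in the paper.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2.0 * x**2 + rng.normal(0, 5, x.size)   # curved relationship + noise

# Linear model: least-squares line via polyfit(deg=1)
a, b = np.polyfit(x, y, 1)
mse_linear = np.mean((y - (a * x + b)) ** 2)

# Non-linear model: predict each point from its k nearest neighbours
def knn_predict(x, y, k=5):
    preds = np.empty_like(y)
    for i in range(x.size):
        idx = np.argsort(np.abs(x - x[i]))[1:k + 1]  # skip the point itself
        preds[i] = y[idx].mean()
    return preds

mse_knn = np.mean((y - knn_predict(x, y)) ** 2)
print(mse_linear > mse_knn)  # non-linear model fits the curved data better
```

Feature selection and data remodeling would slot in before the fitting step, shrinking or restructuring `x` before either model sees it.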
With the widespread use of energy storage systems, battery state of health (SOH) monitoring has become one of the most crucial challenges in power and energy research, as SOH significantly affects the performance and life cycle of batteries as well as the systems they interact with. Identifying the SOH and adapting the battery energy/power management system accordingly are thus two important challenges for applications such as electric vehicles, smart buildings and hybrid power systems. This dissertation focuses on the identification of lithium ion battery capacity fading, and proposes an on-board implementable model parametrization and adaptation framework for SOH monitoring. Both parametric and non-parametric approaches based on kernel functions are explored for the modeling of battery charging data and aging signature extraction. A unified parametric open circuit voltage model is first developed to improve the accuracy of battery state estimation. Several analytical and numerical methods are then investigated for the non-parametric modeling of battery data, among which the support vector regression (SVR) algorithm is shown to be the most robust and consistent approach with respect to data sizes and ranges. For data collected on LiFePO4 cells, it is shown that the model developed with the SVR approach is able to predict the battery capacity fading with less than 2% error. Moreover, motivated by the initial success of applying kernel based modeling methods for battery SOH monitoring, this dissertation further exploits the parametric SVR representation for real-time battery characterization supported by test data. Through the study of the invariant properties of the support vectors, a kernel based model parametrization and adaptation framework is developed. The high dimensional optimization problem in the learning algorithm can be reformulated as a parameter estimation problem that can be solved by standard estimation algorithms such as the
In modern market conditions, sustainable and effective management of relationships among manufacturers, suppliers and customers is a necessity for competitiveness. Suppliers must satisfy customers' expectations such as cost minimization, quality maximization, improved flexibility and met deadlines; systematic management of products, work and environmental safety is also required. The supplier selection process grows steadily more complicated as the number of suppliers and of selection criteria increases. Supplier selection decisions, which play an important role in efficient supplier management, become more consistent through the application of decision-making models that integrate quantitative and qualitative evaluation factors. In this study, a dynamic process is designed and modeled for supplier selection. For this purpose, evaluation criteria are established according to the Balanced Scorecard perspectives, system sustainability and safety requirements. The Fuzzy Analytic Hierarchy Process method is used for evaluating the importance of supplier selection criteria. A utility range-based interactive group decision-making method is used to select the best supplier. To test the proposed model, a representative from the airport operation sector is selected. Finally, it is shown that the proposed model generates consistent results for supplier selection decisions.
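The crisp core of AHP, deriving criterion weights from a pairwise comparison matrix, can be shown in a few lines. The comparison matrix below is a made-up example with three hypothetical criteria; the paper's fuzzy AHP extends this step with fuzzy comparison values before aggregation.

```python
# Crisp AHP weight derivation by the geometric-mean (row geometric mean)
# method. The pairwise matrix is invented: M[i][j] says how much more
# important criterion i is than criterion j on the Saaty 1-9 scale.
import math

criteria = ["cost", "quality", "delivery"]   # hypothetical criteria
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]

geo = [math.prod(row) ** (1 / len(row)) for row in M]   # row geometric means
total = sum(geo)
weights = [g / total for g in geo]                       # normalized priorities
print([round(w, 3) for w in weights])
```

These priority weights would then feed the utility range-based group decision step, where each candidate supplier's scores on the criteria are combined into an overall ranking.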
Tajima, Yoshiyuki; Onisawa, Takehisa
In this chapter, the learning algorithm ME-FPRL is discussed and applied to a pursuit-target task. Application results show that ME-FPRL is more efficient than RL or model-based RL alone. ME-FPRL is thus found to be applicable to practical tasks. Our future work is constructing a more efficient learning system by using advice and communication.
Chenine, Moustafa; Nordström, Lars
Phasor-based wide-area monitoring and control (WAMC) systems are becoming a reality with increased research, development, and deployments. Many potential control applications based on these systems are being proposed and researched. These applications are either local applications using data from one or a few phasor measurement units (PMUs) or centralized utilizing data from several PMUs. An aspect of these systems, which is less well researched, is the WAMC system's dependence on high-perfor...
Applicability of a model to estimate radiofrequency electromagnetic field (RF-EMF) strength in households from mobile phone base stations was evaluated with technical data of mobile phone base stations available from the German Net Agency, and dosimetric measurements performed in an epidemiological study. Estimated exposure and exposure measured with dosemeters in 1322 participating households were compared. For that purpose, the upper 10th percentiles of both outcomes were defined as the 'higher exposed' groups. To assess the agreement of the defined 'higher exposed' groups, the kappa coefficient, sensitivity and specificity were calculated. The present results show only a weak agreement of calculations and measurements (kappa values between -0.03 and 0.28, sensitivity between 7.1 and 34.6). Only in some of the sub-analyses was a higher agreement found, e.g. when measured instead of interpolated geo-coordinates were used to calculate the distance between households and base stations, which is one important parameter in modelling exposure. During the development of the exposure model, more precise input data were available for its internal validation, which yielded kappa values between 0.41 and 0.68 and sensitivity between 55 and 76 for different types of housing areas. Contrary to this, the calculation of exposure on the basis of the available imprecise data from the epidemiological study is associated with a relatively high degree of uncertainty. Thus, the model can only be applied in epidemiological studies when the uncertainty of the input data is considerably reduced. Otherwise, the use of dosemeters to determine the exposure from RF-EMF in epidemiological studies is recommended. (authors)
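The agreement statistics used here (Cohen's kappa, sensitivity, specificity for two binary 'higher exposed' classifications) are straightforward to compute. The labels below are invented toy data, not study values.

```python
# Agreement between model-estimated and dosemeter-measured 'higher exposed'
# groups: Cohen's kappa, sensitivity and specificity from two binary
# labelings. Toy labels, for illustration only.

def agreement(measured, estimated):
    tp = sum(m and e for m, e in zip(measured, estimated))
    tn = sum(not m and not e for m, e in zip(measured, estimated))
    fp = sum(not m and e for m, e in zip(measured, estimated))
    fn = sum(m and not e for m, e in zip(measured, estimated))
    n = tp + tn + fp + fn
    po = (tp + tn) / n                                          # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2 # chance agreement
    kappa = (po - pe) / (1 - pe)
    return kappa, tp / (tp + fn), tn / (tn + fp)                # kappa, sens, spec

measured  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # dosemeter-based top decile
estimated = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # model-based top decile
kappa, sens, spec = agreement(measured, estimated)
print(round(kappa, 2), round(sens, 2), round(spec, 2))
```

A kappa near zero, as the study reports for its main comparison, means the model's 'higher exposed' group overlaps the measured one barely better than chance.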
Sun, Z.; Wang, Q.; Matsushita, B.; Fukushima, T.; Ouyang, Z.; Gebremichael, M.
Remote sensing (RS) has been considered the most promising tool for evapotranspiration (ET) estimation from local and regional to global scales. Many studies have been conducted to estimate ET using RS data; however, most of them rely partially on ground observations, which limits the applications of these algorithms when the necessary data are unavailable. Some other algorithms can generate real-time ET solely from remote sensing data, but lack mechanistic realism. In our study, we developed a new dual-source Simple Remote Sensing EvapoTranspiration model (Sim-ReSET) based only on RS data. One merit of this model is that the calculation of aerodynamic resistance can be avoided by means of a reference dry bare soil and an assumption that wind speed at the upper boundary of the atmospheric surface layer is homogeneous, while aerodynamic characteristics are still considered by means of canopy height. The other merit is that all inputs (net radiation, soil heat flux, canopy height, variables related to land surface temperature) can potentially be obtained from remote sensing data, which makes a regular RS-driven ET product possible. For the purposes of sensitivity analysis and performance evaluation of the Sim-ReSET model without the effect of potential uncertainties and errors from remote sensing data, the model was tested using only intensive ground observations at the Yucheng ecological station in the North China Plain from 2006 to 2008. Results show that the model performs well for instantaneous ET estimation, with a mean absolute difference (MAD) of 34.27 W/m2 and a root mean square error (RMSE) of 41.84 W/m2 under neutral or near-neutral atmospheric conditions. On 12 cloudless days, the MAD of daily ET accumulated from instantaneous estimations is 0.26 mm/day, and the RMSE is 0.30 mm/day. In our study, we mapped Asian 16-day ET from 2000 to 2009 using only MODIS land data products based on the Sim-ReSET model. Then, the obtained ET product was
Barnawi, Ahmed; Qureshi, M Rizwan Jameel; Khan, Asif Irshad
Universities and institutions today deal with the assessment of large numbers of students. Various evaluation methods have been adopted by examiners in different institutions to examine the ability of an individual, ranging from manual means using paper and pencil to electronic, from oral to written, practical to theoretical, and many others. There is a need to expedite the examination process in order to meet the increasing enrolment of students at universities and institutes. The SIP Based Mass Mobile Examination System (SiBMMES) expedites the examination process by automating various activities in an examination, such as exam paper setting, scheduling and allocating examination time, and evaluation (auto-grading for objective questions). SiBMMES uses the IP Multimedia Subsystem (IMS), an IP communications framework providing an environment for the rapid development of innovative and reusable services. The Session Initiation Protocol (SIP) is a signalling (request-response)...
Barnawi, Ahmed; Qureshi, M Rizwan Jameel; Khan, Asif Irshad; 10.5121/ijsea.2012.3107
Mukherjee, Dwaipayan; Royce, Steven G.; Alexander, Jocelyn A.; Buckley, Brian; Isukapalli, Sastry S.; Bandera, Elisa V.; Zarbl, Helmut; Georgopoulos, Panos G.
Zearalenone (ZEA), a fungal mycotoxin, and its metabolite zeranol (ZAL) are known estrogen agonists in mammals, and are found as contaminants in food. Zeranol, which is more potent than ZEA and comparable in potency to estradiol, is also added as a growth additive in beef in the US and Canada. This article presents the development and application of a Physiologically-Based Toxicokinetic (PBTK) model for ZEA and ZAL and their primary metabolites, zearalenol, zearalanone, and their conjugated glucuronides, for rats and for human subjects. The PBTK modeling study explicitly simulates critical metabolic pathways in the gastrointestinal and hepatic systems. Metabolic events such as dehydrogenation and glucuronidation of the chemicals, which have direct effects on the accumulation and elimination of the toxic compounds, have been quantified. The PBTK model considers urinary and fecal excretion and biliary recirculation and compares the predicted biomarkers of blood, urinary and fecal concentrations with published in vivo measurements in rats and human subjects. Additionally, the toxicokinetic model has been coupled with a novel probabilistic dietary exposure model and applied to the Jersey Girl Study (JGS), which involved measurement of mycoestrogens as urinary biomarkers, in a cohort of young girls in New Jersey, USA. A probabilistic exposure characterization for the study population has been conducted and the predicted urinary concentrations have been compared to measurements considering inter-individual physiological and dietary variability. The in vivo measurements from the JGS fall within the high and low predicted distributions of biomarker values corresponding to dietary exposure estimates calculated by the probabilistic modeling system. The work described here is the first of its kind to present a comprehensive framework developing estimates of potential exposures to mycotoxins and linking them with biologically relevant doses and biomarker measurements
OUYANG Zi-sheng; LIAO Hui; YANG Xiang-qun
This paper is concerned with the statistical modeling of the dependence structure of multivariate financial data using copulas, and with the application of copula functions to VaR valuation. After introducing the pure copula method and the maximum and minimum mixture copula method, the authors present a new algorithm based on more generalized mixture copula functions and the dependence measure, and apply the method to a portfolio of the Shanghai stock composite index and the Shenzhen stock component index. Comparing the results from the various methods, one finds that the mixture copula method is better than the pure Gaussian copula method and the maximum and minimum mixture copula method at different VaR levels.
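The simplest member of the family compared here, the pure Gaussian copula, can be sketched for a two-index portfolio. All parameters (correlation, volatilities, weights) are invented, and normal marginals are used for brevity; the paper's mixture copulas would replace the single Gaussian copula in the sampling step.

```python
# Sketch of copula-based VaR: sample a Gaussian copula for two index returns,
# apply (here, normal) marginals, and read VaR off the simulated portfolio
# loss distribution. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
rho = 0.8                                      # assumed dependence of the indices
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((100_000, 2)) @ L.T    # correlated normals (the copula)

# Marginals: daily returns ~ N(0, 1.5%) for each index (assumption)
returns = 0.015 * z
portfolio = 0.5 * returns[:, 0] + 0.5 * returns[:, 1]

var_99 = -np.quantile(portfolio, 0.01)         # 99% one-day VaR
print(round(float(var_99), 4))
```

Swapping in heavier-tailed marginals or a mixture copula changes only the sampling lines; the VaR read-off from the simulated distribution stays the same, which is what makes the copula framework convenient for comparing dependence models.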
This paper presents a synthetic analysis method for multi-sourced geological data from a geographic information system (GIS). In previous practice of mineral resources prediction, the usual methodology has been statistical analysis of cells delimited based on the idea of random sampling. That can lead to insufficient utilization of local spatial information, since a cell is treated as a point without internal structure. We instead take "cell clusters", i.e., spatial associations of cells, as the basic units of statistics, so that the spatial configuration information of geological variables is easier to detect and utilize, and the accuracy and reliability of prediction are improved. We build a linear multi-discriminating model for the clusters via a genetic algorithm. Both the right-judgment rates and the in-class vs. between-class distance ratios are considered in forming the evolutionary fitness values of the population. An application of the method to gold mineral resources prediction in eastern Xinjiang, China is presented.
Khalili, Nazanin; Naguib, Hani E.; Kwon, Roy H.
Human intervention can be replaced through the development of tools based on sensing devices, with a wide range of applications including humanoid robots and remote, minimally invasive surgery. Like the five human senses, sensors interface with their surroundings to stimulate a suitable response or action. The sense of touch, which arises in human skin, is among the most challenging senses to emulate due to its ultra-high sensitivity. This has brought forth novel challenges in the field of biomimetic robotics. In this work, using a multiphase reaction, a polypyrrole (PPy) based hydrogel is developed as a resistive-type pressure sensor with an intrinsically elastic microstructure stemming from three-dimensional hollow spheres. Furthermore, a semi-analytical constriction resistance model is developed that accounts for the real contact area between the PPy hydrogel sensor and the electrode, along with the dependency of the contact resistance change on the applied load. The model is then solved using a Monte Carlo technique and the sensitivity of the sensor is obtained. The experimental results showed the good tracking ability of the proposed model.
Matthews, RB; Gilbert, NG; Roach, A.; Polhill, JG; Gotts, NM
Agent-based modelling is an approach that has been receiving attention by the land use modelling community in recent years, mainly because it offers a way of incorporating the influence of human decision-making on land use in a mechanistic, formal, and spatially explicit way, taking into account social interaction, adaptation, and decision-making at different levels. Specific advantages of agent-based models include their ability to model individual decision-making entities and their interact...
National Oceanic and Atmospheric Administration, Department of Commerce — This project is developing food web models for ecosystem-based management applications in Puget Sound. It is primarily being done by NMFS FTEs and contractors, in...
Li, Jing-wen; Jiang, Jian-wu; Zhou, Song; Yin, Shou-qiang
Using CityEngine as the 3D modeling platform, we study rapid urban 3D modeling technology based on CGA (Computer Generated Architecture) rules, solving the problem of rapidly creating urban 3D models of large scenes. We also study building texture processing and 3D model optimization techniques based on CGA rules, using a component modeling method to solve the problems of texture distortion and model redundancy found in traditional rapid 3D modeling. Finally, we develop a 3D viewing and analysis system based on ArcGIS Engine, realizing 3D model query, distance measurement, flight along a specified path, 3D marking, scene export, etc.
Nampak, Haleh; Pradhan, Biswajeet; Manap, Mohammad Abd
The objective of this paper is to exploit the potential of an evidential belief function (EBF) model for spatial prediction of groundwater productivity in the Langat basin area, Malaysia, using geographic information system (GIS) techniques. About 125 groundwater yield data were collected from well locations. Subsequently, the groundwater yield was divided into high (⩾11 m3/h) and low (<11 m3/h) classes for modeling and validation purposes. To perform cross validation, the frequency ratio (FR) approach was applied to the remaining groundwater wells with low yield to show the spatial correlation with the low-potential zones of groundwater productivity. A total of twelve groundwater conditioning factors affecting the storage of groundwater occurrences were derived from various data sources such as satellite-based imagery, topographic maps and associated databases: elevation, slope, curvature, stream power index (SPI), topographic wetness index (TWI), drainage density, lithology, lineament density, land use, normalized difference vegetation index (NDVI), soil and rainfall. Subsequently, the Dempster-Shafer theory of evidence model was applied to prepare the groundwater potential map. Finally, the groundwater potential map derived from the belief map was validated using the testing data. Furthermore, to compare the performance of the EBF result, a logistic regression (LR) model was applied. The success-rate and prediction-rate curves were computed to estimate the efficiency of the EBF model compared with the LR method. The validation results demonstrated that the success rates for the EBF and LR methods were 83% and 82%, respectively. The areas under the curve for the prediction rates of the EBF and LR methods were 78% and 72%, respectively. The outputs of the current research prove the efficiency of EBF in groundwater potential mapping.
Ship detection and classification with space-borne SAR have many potential applications in maritime surveillance, fishery activity management, ship traffic monitoring, and military security. While ship detection techniques with SAR imagery are well established, ship classification is still an open issue. One of the main reasons may be ascribed to the difficulty of acquiring the required quantities of real data on vessels under different observation and environmental conditions with precise ground truth. Therefore, simulation of SAR images with high scenario flexibility and reasonable computational cost is compulsory for the development of ship classification algorithms. However, the simulation of SAR imagery of a ship over the sea surface is challenging. Though great efforts have been devoted to this difficult problem, it is far from being conquered. This paper proposes a novel scheme for SAR imagery simulation of a ship over the sea surface. The simulation is implemented based on the high frequency electromagnetic calculation methods PO, MEC, PTD and GO. SAR imagery of sea clutter is modelled by the representative K-distribution clutter model. The simulated SAR imagery of the ship is then produced by inserting simulated SAR imagery chips of the ship into the SAR imagery of sea clutter. The proposed scheme has been validated with canonical and complex ship targets over a typical sea scene.
Based on the Danckwerts surface renewal model, a simple explicit expression for the enhancement factor in ozone absorption with first-order ozone self-decomposition and parallel second-order ozonation reactions has been derived. The results are compared with our previous work based on the film theory. The 2,4-dichlorophenol destruction rate by ozonation is predicted using the enhancement factor model in this paper.
Two new calculational models based on the use of cross-section sensitivity coefficients have been devised for calculating radiation transport in relatively simple shields. The two models, one an exponential model and the other a power model, have been applied, together with the traditional linear model, to 1- and 2-m-thick concrete-slab problems in which the water content, reinforcing-steel content, or composition of the concrete was varied. Comparing the results obtained with the three models with those obtained from exact one-dimensional discrete-ordinates transport calculations indicates that the exponential model, named the BEST model (for basic exponential shielding trend), is a particularly promising predictive tool for shielding problems dominated by exponential attenuation. When applied to a deep-penetration sodium problem, the BEST model also yields better results than do calculations based on second-order sensitivity theory
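The comparison among the three predictive forms can be illustrated by fitting each to attenuation data. The data below are fabricated to follow an exponential trend, as in deep-penetration shielding; the fitting is plain least squares in the appropriate transformed space, not the paper's sensitivity-coefficient machinery.

```python
# The three predictive forms compared in the paper, fitted to synthetic
# transmission-vs-water-content data: linear, power, and exponential (BEST).
# Data are fabricated to follow an exponential trend.
import numpy as np

w = np.array([2.0, 4.0, 6.0, 8.0, 10.0])       # water content, percent (toy)
T = 5.0 * np.exp(-0.35 * w)                    # "exact" transport results (toy)

lin = np.polyfit(w, T, 1)                      # linear:      T ≈ a*w + b
pow_ = np.polyfit(np.log(w), np.log(T), 1)     # power:       ln T ≈ p*ln w + c
exp_ = np.polyfit(w, np.log(T), 1)             # exponential: ln T ≈ -b*w + c

def sse(pred):
    return float(np.sum((T - pred) ** 2))

errs = {
    "linear": sse(np.polyval(lin, w)),
    "power": sse(np.exp(np.polyval(pow_, np.log(w)))),
    "exponential": sse(np.exp(np.polyval(exp_, w))),
}
print(min(errs, key=errs.get))  # → exponential
```

On attenuation-dominated data the log-linear (exponential) form fits essentially exactly, which is why the BEST model is the promising predictor for such shields.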
XUAN Ying; XIANG JunHua; ZHANG WeiHua; ZHANG YuLin
In the process of multidisciplinary design optimization, there exists a computational complexity problem due to frequently calling high-fidelity system analysis models. These high-fidelity models can be surrogated by approximate models, with which sensitivity analysis and numerical noise filtering can be done easily. Approximate models reduce the number of executions of the problem's simulation code during optimization, so the solution efficiency of the multidisciplinary design optimization problem can be improved. Most optimization methods are gradient-based, and the gradients of the objective and constraint functions are easily obtained. The gradient-based Kriging (GBK) approximate model can be constructed using system response values and their gradients, which greatly improve the prediction precision of the system response. A hybrid optimization method is constructed by coupling GBK approximate models to gradient-based optimization methods. An aircraft aerodynamic shape optimization design example indicates that the methods of this paper achieve good feasibility and validity.
Negative selection against protein instability is a central influence on the evolution of proteins. Protein stability is maintained over evolution despite changes in the underlying sequences. An empirical all-site stability-based model of evolution was developed to focus on the selection of residues arising from their contributions to protein stability. In this model, site rates could vary. A structure-based method was used to predict stationary frequencies of hemoglobin residues based on their prope...
Kuan Chee Houng; Bharanidharan Shanmugam; Ganthan Narayana Samy; Sameer Hasan Albakri; Azuan Ahmad
The Microsoft-based mobile sales management application is a sales force management application that currently runs on Windows Mobile 6.5. It handles sales-related activities and cuts down the administrative tasks of sales representatives. Microsoft then launched a new mobile operating system, Windows Phone, and stopped providing support for Windows Mobile. This has become an obstacle for Windows Mobile development. Over time, Windows Mobile will be eliminated from the market due to no support...
This study suggests a new prediction model for chaotic time series inspired by the brain emotional learning of mammals. We describe the structure and function of this model, referred to as BELPM (Brain Emotional Learning-Based Prediction Model). Structurally, the model mimics the connections between the regions of the limbic system, and functionally it uses weighted k-nearest neighbors to imitate the roles of those regions. The learning algorithm of BELPM is defined using steepest des...
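The weighted k-nearest-neighbour component can be sketched on a standard chaotic benchmark. The logistic map, embedding length, and k below are arbitrary choices for this sketch, standing in for the role weighted kNN plays inside BELPM rather than reproducing the full model.

```python
# Weighted k-nearest-neighbour prediction on a chaotic series (logistic map).
# Embedding length m and neighbour count k are illustrative assumptions.

def logistic_series(n, x0=0.3, r=3.9):
    """Generate n points of the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def knn_forecast(series, m=3, k=4):
    """Predict the next value from the k past windows closest to the last one."""
    query = series[-m:]
    cands = []
    for i in range(len(series) - m):          # windows with a known successor
        window = series[i:i + m]
        d = sum((a - b) ** 2 for a, b in zip(window, query)) ** 0.5
        cands.append((d, series[i + m]))
    cands.sort()
    nearest = cands[:k]
    wts = [1.0 / (d + 1e-12) for d, _ in nearest]   # inverse-distance weights
    return sum(w * y for (_, y), w in zip(nearest, wts)) / sum(wts)

xs = logistic_series(2000)
pred = knn_forecast(xs[:-1])      # predict the held-out final value
print(round(pred, 3), round(xs[-1], 3))
```

Even on a chaotic series, nearby histories have nearby successors over one step, which is what both plain weighted kNN and the limbic-system-inspired weighting in BELPM exploit.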
Milazzo, Paolo; 10.4204/EPTCS.33
This volume contains the papers presented at the first International Workshop on Applications of Membrane Computing, Concurrency and Agent-based Modelling in Population Biology (AMCA-POP 2010) held in Jena, Germany on August 25th, 2010 as a satellite event of the 11th Conference on Membrane Computing (CMC11). The aim of the workshop is to investigate whether formal modelling and analysis techniques could be applied with profit to systems of interest for population biology and ecology. The considered modelling notations include membrane systems, Petri nets, agent-based notations, process calculi, automata-based notations, rewriting systems and cellular automata. Such notations enable the application of analysis techniques such as simulation, model checking, abstract interpretation and type systems to study systems of interest in disciplines such as population biology, ecosystem science, epidemiology, genetics, sustainability science, evolution and other disciplines in which population dynamics and interactions...
Current trends in robotics aim to close the gap that separates technology and humans, bringing novel robotic devices to improve human performance. Although robotic exoskeletons represent a breakthrough in mobility enhancement, there are design challenges related to the forces exerted on the users' joints that can result in severe injuries. This occurs because most current developments consider the joints as noninvariant rotational axes. This paper proposes the use of commercial vision systems to perform biomimetic joint design for robotic exoskeletons. This work proposes a kinematic model based on irregularly shaped cams as the joint mechanism that emulates the bone-to-bone joints in the human body. The paper follows a geometric approach to determining the location of the instantaneous center of rotation in order to design the cam contours. Furthermore, a commercial vision system is proposed as the main measurement tool due to its noninvasive nature and because it allows subjects under measurement to move freely. The application of this method yielded relevant information about the displacements of the instantaneous center of rotation at the human knee joint.
Ichiba, Tomoyuki; Shkolnikov, Mykhaylo
We determine rates of convergence of rank-based interacting diffusions and semimartingale reflecting Brownian motions to equilibrium. The convergence rate in the total variation metric is derived using Lyapunov functions. Sharp fluctuations of additive functionals are obtained using transportation cost-information inequalities for Markov processes. We work out various applications to the rank-based abstract equity markets used in Stochastic Portfolio Theory. For example, we produce quantitative bounds, including constants, for fluctuations of market weights and occupation times of various ranks for individual coordinates. Another important application is the comparison of performance between symmetric functionally generated portfolios and the market portfolio, which produces estimates of the probability of "beating the market".
The Tensor Product (TP) model transformation is a recently proposed technique for transforming given Linear Parameter Varying (LPV) state-space models into polytopic model form, namely, into a parameter-varying convex combination of Linear Time Invariant (LTI) systems. The main advantage of the TP model transformation is that it is executable in a few minutes and that Linear Matrix Inequality (LMI)-based control design frameworks can immediately be applied to the resulting polytopic models to yield controllers with tractable and guaranteed performance. Various applications of TP model transformation-based design have been studied via complex academic and benchmark problems, but no study based on a real experimental environment has been published. Thus, the main objective of this paper is to study how the TP model transformation performs in a real-world problem and control setup. The laboratory concept for TP model-based controller design, simulation and real-time execution on an electromechanical system is presented. The development system for the TP model-based controller, with one hardware/software platform, and the target system, with real-time hardware/software support, are connected in a unified system. The proposed system is based on the microprocessor of a personal computer (PC) for simulation and software development as well as for real-time control. The control algorithm, designed and simulated in the MATLAB/SIMULINK environment, uses a graphically oriented software interface for real-time code generation. Some conflicting industrial tasks in a real industrial crane application, such as fast load positioning control and load swing angle minimization, are considered and compared with other controller types.
A fish-eye camera calibration model is presented, whose basic observations consist of both the half angle of view and the azimuth. A Rodrigues matrix is introduced into the model, and three Rodrigues parameters are used instead of Euler angles to represent the elements of exterior orientation, in order to simplify the expressions and calculations of the observation equations. The new model is compared with existing models based on the half-angle-of-view constraint using actual star-map data, and the results indicate that the new model is superior in controlling the azimuth error, while slightly inferior in constraining the half-angle-of-view error. It is advised that the radial distortion parameters be determined first by the model based on the half-angle-of-view constraint, and that the other camera parameters be calculated by the new model.
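The appeal of the Rodrigues parameterisation in the abstract above is that a rotation is built algebraically, without trigonometric functions. One common convention routes the three parameters through a skew-symmetric matrix and a Cayley transform; the paper's exact convention is not specified, so `rodrigues_matrix` below is a hypothetical illustration.

```python
import numpy as np

def rodrigues_matrix(a, b, c):
    """Rotation matrix from three Rodrigues parameters via the Cayley
    transform R = (I - S)^{-1} (I + S) -- one common convention, assumed
    here for illustration. No trig functions are needed, which is part
    of why this form simplifies the observation equations."""
    S = np.array([[0.0,  -c,   b],
                  [  c, 0.0,  -a],
                  [ -b,   a, 0.0]])
    I = np.eye(3)
    return np.linalg.solve(I - S, I + S)  # computes (I - S)^{-1} (I + S)

R = rodrigues_matrix(0.1, -0.2, 0.3)
```

By construction R is always a proper rotation (orthogonal, determinant +1), so no renormalisation step is needed during the adjustment.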
Despite the short history of Web development, Web-related technologies are developing rapidly, yet the quality of Web applications is improving only slowly, which calls for efficient methods of developing Web systems. This study presents a model for Web-based software development covering the analysis and design phases, based on the ISO/IEC 12207 standard. It describes the methods used to define processes and entities in order to reflect the contents in Web applications. It applies the Web-Road Map methodology of KCC Information and Technology, using this model, to a public project. As a result, Web-Road Map proves to be an efficient model for analyzing and designing Web applications.
Christensen, Ole Fredslund
Today geostatistics is used in a number of research areas, among them the agricultural and environmental sciences. This thesis concerns data and applications where the classical Gaussian spatial model is not appropriate. A transformation could be used in an attempt to obtain data that are approximat...
Guo, Xiaoqiang; Lu, Zhigang; Wang, Baocheng; Sun, Xiaofeng; Wang, Lei; Guerrero, Josep M.
System modeling and stability analysis is one of the most important issues for inverter-dominated microgrids. It is useful for determining system stability and optimizing the control parameters. Complete small-signal models for inverter-dominated microgrids, which are very accurate, have been developed and can be found in the literature. However, the modeling procedure becomes very complex when the number of inverters in the microgrid is large. One possible solution is to use reduced-order small-signal models for the inverter-dominated microgrids. Unfortunately, the reduced-order small-signal … of the system, while the conventional reduced-order small-signal model fails. In addition, the virtual ω-E frame power control method, which deals with the power coupling caused by the line impedance X/R characteristic, has also been chosen as an application example of the proposed modeling technique.
This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrode (BME) capacitor task, the development of a general reliability model for BME capacitors, and a summary and future work.
A novel method is proposed for image segmentation based on probabilistic field theory. This model assumes that all the pixels of an image, together with some unknown parameters, form a field. According to this model, the pixel labels are generated by a compound function of the field. The main novelty of this model is that it considers both the features of the pixels and the interdependence among the pixels. The parameters are generated by a novel spatially variant mixture model and estimated by an expectation-maximization (EM)-based algorithm. Thus, we simultaneously impose spatial smoothness through the prior knowledge. Numerical experiments are presented in which the proposed method and other mixture-model-based methods were tested on synthetic and real-world images. The experimental results demonstrate that our algorithm achieves competitive performance compared to other methods.
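For orientation, the EM mechanics behind such mixture-model segmentation can be sketched for the plain (not spatially variant) case; `em_segment` below is an illustrative stand-in that omits the spatial prior which is the paper's actual contribution.

```python
import numpy as np

def em_segment(pixels, n_iter=60):
    """EM for a two-component 1-D Gaussian mixture over pixel
    intensities, returning hard labels. A minimal baseline sketch."""
    x = np.asarray(pixels, dtype=float)
    mu = np.percentile(x, [25, 75])          # crude initialisation
    var = np.full(2, x.var())
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per pixel
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
              / np.sqrt(2 * np.pi * var)
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means and variances
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return resp.argmax(axis=1)               # hard pixel labels
```

The spatially variant version replaces the global weights `w` with per-pixel mixing weights coupled to their neighbours, which is where the spatial smoothness enters.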
Parry, J; Shidore, S
The accurate prediction of the temperatures of critical electronic parts at the package-, board- and system-level is seriously hampered by the lack of reliable, standardised input data for the characterisation of the thermal behaviour of these parts. The recently completed collaborative European project DELPHI has been concerned with the creation and experimental validation of thermal models (both detailed and compact) of a range of electronic parts, including mono-chip packages. This paper demonstrates the reliable performance of thermal compact models in a range of applications, by comparison with the detailed models from which they were derived. (31 refs).
程江; 杨卓如; 陈焕钦; C.H.Kuo; M.E.Zappi
Based on the Danckwerts surface renewal model, a simple explicit expression for the enhancement factor in ozone absorption with first-order ozone self-decomposition and parallel second-order ozonation reactions has been derived. The results are compared with our previous work based on the film theory. The 2,4-dichlorophenol destruction rate by ozonation is predicted using the enhancement factor model derived in this paper.
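The abstract does not reproduce the expression itself. For orientation only, in the limiting pseudo-first-order case the Danckwerts surface renewal model gives the classical enhancement factor (the paper's expression, which adds the parallel second-order ozonation reactions, is more involved):

```latex
E = \sqrt{1 + \mathrm{Ha}^2}, \qquad
\mathrm{Ha}^2 = \frac{D_{\mathrm{O_3}}\, k_1}{k_L^2}
```

where $D_{\mathrm{O_3}}$ is the ozone diffusivity in the liquid, $k_1$ the pseudo-first-order rate constant, and $k_L$ the physical liquid-side mass transfer coefficient.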
Morris, Carla C
Features step-by-step examples based on actual data and connects fundamental mathematical modeling skills and decision-making concepts to everyday applicability. Featuring key linear programming, matrix, and probability concepts, Finite Mathematics: Models and Applications emphasizes cross-disciplinary applications that relate mathematics to everyday life. The book provides a unique combination of practical mathematical applications illustrating the wide use of mathematics in fields ranging from business, economics, finance, management, and operations research to the life and social sciences.
de la Torre, Jimmy
Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used…
Delhalle, Laurent; Adolphe, Ysabelle; Crevecoeur, Sébastien; Didimo Imazaki, Pedro Henrique; Daube, Georges; Clinquart, Antoine
Predictive microbiology is considered by the European legislation as a tool to control food safety. Meat and meat products are particularly sensitive to contamination with pathogens. However, development of predictive microbiology models and interpretation of their results require specific knowledge. A free web-based model has been developed for easy use by people who are not experts in this field, such as industries and public authorities. The model can simulate the growth of Salmonella spp., Listeria...
Huh, Yeamin; Hutmacher, Matthew M
Parametric models used in time-to-event analyses are typically evaluated by survival-based visual predictive checks (VPCs). Kaplan-Meier survival curves for the observed data are compared with those estimated from model-simulated data. Because the derivative of the log of the survival curve is related to the hazard (the typical quantity modeled in parametric analysis), isolating, interpreting and correcting deficiencies in the hazard model by inspection of survival-based VPCs is indirect and thus more difficult. The purpose of this study is to assess the performance of nonparametric estimators of the hazard function to evaluate their viability as VPC diagnostics. Histogram-based and kernel-smoothing estimators were evaluated in terms of bias in estimating the hazard for Weibull and bathtub-shape hazard scenarios. After the evaluation of bias, these nonparametric estimators were assessed as a method for VPC evaluation of the hazard model. The results showed that the nonparametric hazard estimators performed reasonably at the sample sizes studied, with greater bias near the boundaries (time equal to 0 and the last observation), as expected. Flexible bandwidth and boundary correction methods reduced these biases. All the nonparametric estimators indicated a misfit of the Weibull model when the true hazard was bathtub-shaped. Overall, hazard-based VPC plots enabled more direct interpretation of the VPC results compared to survival-based VPC plots. PMID:26563504
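A minimal version of the kernel-smoothing idea discussed above, for fully observed (uncensored) event times: Nelson-Aalen increments are smoothed with a Gaussian kernel. This is a generic sketch without the flexible-bandwidth and boundary corrections the study evaluates, and it is not the authors' implementation.

```python
import numpy as np

def kernel_hazard(times, grid, bandwidth):
    """Kernel-smoothed hazard estimate from uncensored event times.

    Smooths the Nelson-Aalen increments 1/Y(t_i) with a Gaussian
    kernel of the given bandwidth, evaluated on `grid`."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    at_risk = n - np.arange(n)        # Y(t_i): subjects still at risk
    increments = 1.0 / at_risk        # Nelson-Aalen jump sizes
    h = np.zeros(len(grid))
    for j, g in enumerate(grid):
        w = np.exp(-0.5 * ((g - t) / bandwidth) ** 2) \
            / (bandwidth * np.sqrt(2.0 * np.pi))
        h[j] = np.sum(w * increments)
    return h
```

For exponential data the true hazard is flat, so plotting this estimate against a parametric model's hazard gives the direct, hazard-based VPC comparison the abstract argues for; the downward bias near t = 0 is the boundary effect the paper corrects.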
Yilmaz, Koray; Derin, Yagmur
The accuracy and reliability of hydrological modeling studies depend heavily on the quality and availability of precipitation estimates. However, hydrological studies in developing countries, especially over complex topography, are limited by the unavailability and scarcity of ground-based networks. In this study we evaluate three different satellite-based rainfall retrieval algorithms, namely the Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA), the NOAA/Climate Prediction Center Morphing Method (CMORPH) and EUMETSAT's Multi-Sensor Precipitation Estimate (MPE), over the orographically complex Western Black Sea Basin in Turkey, using a relatively dense rain gauge network. Our results indicated that the satellite-based products significantly underestimate rainfall in regions characterized by orographic rainfall and overestimate rainfall in the drier regions, with seasonal dependency. Further, we devised a new bias adjustment algorithm for the satellite-based rainfall products based on the "physiographic similarity" concept. Our results showed that the proposed bias adjustment algorithm is better suited to regions with complex topography and provided improved results compared to the baseline "inverse distance weighting" method. To evaluate the utility of satellite-based products in hydrologic modeling studies, we implemented the MIKE SHE-MIKE 11 integrated, fully distributed, physically based hydrological model in the study region, driven by ground-based and satellite-based precipitation estimates. Model parameter estimation was performed using a constrained calibration approach guided by multiple "signature measures" to estimate model parameters in a hydrologically meaningful way, rather than using the traditional "statistical" objective functions that largely mask valuable hydrologic information during the calibration process. In this presentation we will provide a discussion of evaluation and bias correction of the satellite-based precipitation products and
Campbell, James; Hyer, Edward; Zhang, Jianglong; Reid, Jeffrey; Westphal, Douglas; Xian, Peng; Vaughan, Mark
Modeling the instantaneous three-dimensional aerosol field and its downwind transport represents an endeavor with many practical benefits foreseeable to air quality, aviation, military and science agencies. The recent proliferation of multi-spectral active and passive satellite-based instruments measuring aerosol physical properties has served as an opportunity to develop and refine the techniques necessary to make such numerical modeling applications possible. Spurred by high-resolution global mapping of aerosol source regions, and combined with novel multivariate data assimilation techniques designed to consider these new data streams, operational forecasts of visibility and aerosol optical depths are now available in near real-time. Active satellite-based aerosol profiling, accomplished using lidar instruments, represents a critical element for accurate analysis and transport modeling. Aerosol source functions alone can be limited in representing the macrophysical structure of injection scenarios within a model. Two-dimensional variational (2D-VAR; x, y) assimilation of aerosol optical depth from passive satellite observations significantly improves the analysis of the initial state. However, this procedure cannot fully compensate for any potential vertical redistribution of mass required at the innovation step. The expense of an inaccurate vertical analysis of aerosol structure is corresponding errors downwind, since trajectory paths within successive forecast runs will likely diverge with height. In this paper, the application of a newly designed system for 3D-VAR (x, y, z) assimilation of vertical aerosol extinction profiles derived from elastic-scattering lidar measurements is described [Campbell et al., 2009]. Performance is evaluated for use with the U.S. Navy Aerosol Analysis and Prediction System (NAAPS) by assimilating NASA/CNES satellite-borne Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) 0.532 μm measurements [Winker et al., 2009].
Murphy, Thomas Brendan; Dean, Nema; Raftery, Adrian E.
Food authenticity studies are concerned with determining if food samples have been correctly labeled or not. Discriminant analysis methods are an integral part of the methodology for food authentication. Motivated by food authenticity applications, a model-based discriminant analysis method that includes variable selection is presented. The discriminant analysis model is fitted in a semi-supervised manner using both labeled and unlabeled data. The method is shown to give ...
According to the propagation and destruction characteristics of mobile phone viruses, a virus detection model based on the Danger Theory is proposed. This model includes four phases: danger capture, antigen presentation, antibody generation and antibody distribution. In this model, local knowledge of mobile phones is exploited by agents running on the phones to discover danger caused by viruses. The Antigen Presenting Cells (APCs) present the antigen from mobile phones in the danger zone, and the Decision Center confirms the virus infection. After the antibody is generated through self-tolerance using the negative selection algorithm, the Decision Center distributes the antibody to the mobile phones. Due to the distributed and cooperative mechanism of the artificial immune system, the proposed model lowers the storage and computing consumption of mobile phones. The simulation results show that, based on the mobile phone virus detection model, the proposed virus immunization strategy can effectively inhibit the propagation of mobile phone viruses.
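The negative selection step mentioned above can be sketched in a toy form: random candidate detectors are kept only if they match no "self" string. The r-contiguous-bits matching rule used here is one classic choice, assumed for illustration; the paper's actual representation and matching rule may differ.

```python
import random

def r_contiguous_match(a, b, r):
    """True if bit strings a and b agree on r contiguous positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def negative_selection(self_set, n_detectors, length, r, seed=0):
    """Generate detectors tolerant of the self set: random candidates
    that match any self string are discarded (self-tolerance)."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.randint(0, 1) for _ in range(length)]
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors
```

Anything a surviving detector later matches is, by construction, non-self and therefore flagged as potentially dangerous.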
YANG Meini; LI Dingfang; YANG Jinbo; XIONG Wei
A fuzzy neural network model is proposed to evaluate water quality. The model has two parts: first, fuzzy mathematics is used to standardize the samples; second, an RBF neural network and a BP neural network are used to train the standardized samples. The proposed model was applied to assess the water quality of 16 sections in 9 rivers in the Shaoguan area in 2005. The evaluation result was compared with that of the RBF neural network method and with results reported for the Shaoguan area in 2005, indicating that the proposed fuzzy neural network model is practically feasible for water quality assessment and simple to operate.
Goodwin, Graham. C.; Medioli, Adrian. M.
Model predictive control has been a major success story in process control. More recently, the methodology has been used in other contexts, including automotive engine control, power electronics and telecommunications. Most applications focus on set-point tracking and use single-sequence optimisation. Here we consider an alternative class of problems, motivated by the scheduling of emergency vehicles, in which disturbances are the dominant feature. We develop a novel closed-loop model predictive control strategy aimed at this class of problems, and we motivate and illustrate the ideas via the problem of fluid deployment of ambulance resources.
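The receding-horizon principle the authors build on can be sketched in a few lines for a scalar linear-quadratic problem: solve a finite-horizon problem, apply only the first input, then re-solve from the new state. This is the generic open-loop-optimal MPC baseline, not the closed-loop, disturbance-oriented strategy the paper develops.

```python
def mpc_step(x, a, b, q, r, horizon):
    """First input of the finite-horizon LQ problem for x+ = a*x + b*u
    with stage cost q*x^2 + r*u^2, via a backward Riccati recursion."""
    p = q                                   # terminal weight P_N = q
    k = 0.0
    for _ in range(horizon):                # sweep backwards in time
        k = (b * p * a) / (r + b * p * b)   # feedback gain at this stage
        p = q + a * p * a - a * p * b * k   # Riccati update
    return -k * x                           # apply only the first move

# Closed loop: an unstable plant (a = 1.2) is stabilised by replanning
# from the measured state at every step.
x = 5.0
for _ in range(20):
    u = mpc_step(x, a=1.2, b=1.0, q=1.0, r=0.1, horizon=10)
    x = 1.2 * x + 1.0 * u
```

Replanning from the measured state at each step is what gives MPC its feedback character; the paper's contribution is to go further and optimise over feedback policies when disturbances dominate.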
The deployment of a fault diagnosis strategy in the Smart Distance Keeping (SDK) system with a decentralized architecture is presented. The SDK system is an advanced Adaptive Cruise Control (ACC) system implemented in a Renault-Volvo Trucks vehicle to increase safety by overcoming some ACC limitations. One of the main differences between this new system and classical ACC is the choice of the safe distance, i.e., the distance between the vehicle equipped with the ACC or SDK system and the obstacle in front (which may be another vehicle). It is assumed fixed in the case of the ACC, but variable in the case of the SDK; its variation depends essentially on the relative velocity between the vehicle and the obstacle in front. The main goal of this work is to analyze measurements issued from the SDK elements in order to detect, localize and identify faults that may occur. Our main contribution is a decentralized approach that carries out an on-line diagnosis without computing the global model and achieves most of the work locally, avoiding heavy extra diagnostic information traffic between components. After a detailed description of the SDK system, this paper explains the model-based decentralized solution and its application to the embedded diagnosis of the SDK system inside a Renault-Volvo truck with five control units connected via a CAN bus, using the "Hardware in the Loop" (HIL) technique. We also discuss the constraints that must be fulfilled.
Advanced techniques to fit interatomic potentials consistent with thermodynamics are proposed, and the results of their application to the aforementioned alloys are presented. Next, the development of advanced methods, based on artificial intelligence, to improve both the physical reliability and the computational efficiency of kinetic Monte Carlo codes for the study of point-defect clustering and phase changes beyond the scale of MD is reported. These recent advances hold the promise of producing, in the near future, reliable tools for describing the microstructure evolution of realistic model alloys under irradiation. (author)
Unified Modelling Language (UML) 2.0, introduced in 2002, has continued to develop and influence object-oriented software engineering and has become a standard and reference for information system analysis and design modelling. There are many concepts and theories for modelling an information system or software application with UML 2.0, which can create ambiguities and inconsistencies for a novice learning how to model a system with UML, especially UML 2.0. This article discusses how to model a simple software application using some of the diagrams of UML 2.0, rather than the whole set of diagrams, as suggested by agile methodology. Agile methodology is considered convenient for novices because it can deliver the information technology environment to the end-user quickly and adaptively, with minimal documentation. It also has the ability to deliver the best-performing software application according to the customer's needs. Agile methodology makes a simple model with simple documentation, a simple team and si...
In the process of multidisciplinary design optimization, a calculation complexity problem arises from frequently calling high-fidelity system analysis models. These high-fidelity models can be surrogated by approximate models, with which sensitivity analysis and numerical noise filtering can be done easily when coupled to the optimization. Approximate models reduce the number of executions of the problem's simulation code during optimization, so the solution efficiency of the multidisciplinary design optimization problem can be improved. Most optimization methods are gradient-based, and the gradients of the objective and constraint functions are obtained easily. The gradient-based Kriging (GBK) approximate model can be constructed using system response values and their gradients; the gradients can greatly improve the prediction precision of the system response. A hybrid optimization method is constructed by coupling GBK approximate models to gradient-based optimization methods. An aircraft aerodynamic shape optimization design example indicates that the methods of this paper achieve good feasibility and validity.
Grancharova, Alexandra; Pereira, Fernando
This book deals with optimization methods as tools for decision making and control in the presence of model uncertainty. It is oriented to the use of these tools in engineering, specifically in automatic control design with all its components: analysis of dynamical systems, identification problems, and feedback control design. Developments in Model-Based Optimization and Control takes advantage of optimization-based formulations for such classical feedback design objectives as stability, performance and feasibility, afforded by the established body of results and methodologies constituting optimal control theory. It makes particular use of the popular formulation known as predictive control or receding-horizon optimization. The individual contributions in this volume are wide-ranging in subject matter but coordinated within a five-part structure covering material on: · complexity and structure in model predictive control (MPC); · collaborative MPC; · distributed MPC; · optimization-based analysis and desi...
Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea
This paper presents the modeling and control of an aerostructure built from lattice-based cellular materials/components. The proposed aerostructure concept leverages a building-block strategy for lattice-based components, which provides great adaptability to varying flight scenarios, the needs of which are essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes the discrete-time lumped-mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced-order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The proposed approach shows that, as far as the performance of the overall structural system is concerned, the reduced-order model can be as effective as the full-order model in designing an optimal stabilizing controller.
Álvaro, Roberto; González, Jairo; Fraile Ardanuy, José Jesús; Knapen, Luk; JANSSENS, Davy
This paper describes the impact of electric mobility on the transmission grid in the Flanders region (Belgium), using micro-simulation activity-based models. These models provide temporal and spatial estimates of the energy and power demanded by electric vehicles (EVs) in different mobility zones. The increment in load demand due to electric mobility is added to the background load demand in these mobility areas, and the effects on the transmission substations are analyzed. From t...
A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language BioNetGen, and its spatial extension SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models.
Chuang, Huan-Ming; Lin, Chien-Ku; Chen, Da-Ren; Chen, You-Shyang
Ecological degradation is an escalating global threat. People increasingly express awareness of, and give priority to, the environmental problems surrounding them, and environmental protection issues are accordingly highlighted. An appropriate information technology tool, the increasingly popular social network system (virtual community, VC), facilitates effective public education and engagement with these existing problems. In particular, the involvement behavior behind VC member engagement is an interesting topic. Member engagement processes comprise interrelated sub-processes that reflect an interactive experience within VCs, as well as the value co-creation model. Addressing top-focused ecotourism VCs, this study presents an application of a hybrid expert-based ISM model and DEMATEL model, based on multi-criteria decision-making tools, to investigate the complex multidimensional and dynamic nature of member engagement. Our research findings provide insightful managerial implications and suggest that the viral marketing of ecotourism protection is of concern to practitioners and academicians alike. PMID:24453902
Jouili, Salim; Tabbone, Salvatore
In this paper, we introduce a prototype-based clustering algorithm for graphs. We propose a hypergraph-based model for graph data sets that allows clusters to overlap; in this representation, one graph can be assigned to more than one cluster. Using the concept of the graph median and a given threshold, the proposed algorithm automatically detects the number of classes in the graph database.
ZENG Guangming; SU Xiaokang; HUANG Guohe; XIE Gengxin
The finite element method is one of the typical methods used for numerical water quality modeling of topographically complicated rivers. In this paper, based on probability theory, the probability density of pollutants is introduced. A new model for grid size optimization based on the finite element method is developed, incorporating maximum information entropy theory, for a given grid length. Combined with the empirical evaluation approach of the flow discharge per unit river width, this model can be used to determine the grid size of the finite element method for water quality modeling of a topographically complicated river when the velocity field of the river is not given. Application of the model to an ideal river confirmed its correctness. In a practical case, application of the model to the Hengyang section of the Xiangjiang River, the optimized grid width was obtained and the influence of the parameters was studied, demonstrating that the model reflects the real situation of the pollutants in the river and has many excellent characteristics in practical applications, such as stability, credibility and high applicability.
The development of an extensive experimental nuclear database over the past three decades has been accompanied by parallel advancement of the nuclear theory and models used to describe and interpret the measurements. This theoretical capability is important because many nuclear data requirements are still difficult, impractical, or even impossible to meet with present experimental techniques. Examples of such data needs are neutron cross sections for unstable fission products, which are required for neutron absorption corrections in reactor calculations; cross sections for transactinide nuclei that control production of long-lived nuclear wastes; and the extensive dosimetry, activation, and neutronic data requirements to 40 MeV that must accompany development of the Fusion Materials Irradiation Test (FMIT) facility. In recent years systematic improvements have been made in the nuclear models and codes used in data evaluation and, most importantly, in the methods used to derive physically based parameters for model calculations. The newly issued ENDF/B-V evaluated data library relies in many cases on nuclear reaction theory based on compound-nucleus Hauser-Feshbach, preequilibrium and direct reaction mechanisms as well as spherical and deformed optical-model theories. The development and applications of nuclear models for data evaluation are discussed, with emphasis on the 1 to 40 MeV neutron energy range.
The concept of physiologically based pharmacokinetic (PBPK) modeling was introduced years ago, but for a long time it was not practiced significantly. However, interest in and implementation of this modeling technique have grown, as evidenced by the increasing number of publications in the field. This paper briefly demonstrates the methodology, applications, and limitations of PBPK modeling, with special attention to the use of PBPK models in pediatric drug development, and describes some examples in detail. Although PBPK models do have limitations, the potential benefit of the PBPK modeling technique is huge. PBPK models can be applied to investigate drug pharmacokinetics under different physiological and pathological conditions or in different age groups, to support decision-making during drug discovery, and, perhaps most importantly, to provide data that can save time and resources, especially in early drug development phases and in pediatric clinical trials, potentially helping clinical trials become more "confirmatory" rather than "exploratory".
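The flavor of a PBPK simulation can be conveyed with a deliberately minimal sketch: one plasma and one lumped tissue compartment with perfusion-limited exchange and linear plasma clearance, integrated by explicit Euler. All parameter values below are illustrative placeholders (litres and litres per hour), not data from the paper; real PBPK models use many organ compartments with physiological flows and volumes.

```python
def pbpk_two_compartment(dose, t_end, dt=0.001,
                         v_p=3.0, v_t=35.0, q=5.0, kp=4.0, cl=1.5):
    """Minimal flow-limited PBPK sketch: IV bolus into plasma,
    perfusion-limited exchange with one lumped tissue, and linear
    clearance from plasma. Returns (plasma amount, tissue amount,
    cumulative amount eliminated)."""
    a_p, a_t, eliminated = dose, 0.0, 0.0
    n = int(t_end / dt)
    for _ in range(n):
        c_p, c_t = a_p / v_p, a_t / v_t
        flux = q * (c_p - c_t / kp)    # perfusion-limited exchange
        elim = cl * c_p                # linear (hepatic/renal) clearance
        a_p += dt * (-flux - elim)     # plasma loses to tissue and clearance
        a_t += dt * flux               # tissue gains what plasma loses
        eliminated += dt * elim        # running mass balance of elimination
    return a_p, a_t, eliminated
```

Because each Euler step moves amounts between compartments symmetrically, total mass (remaining plus eliminated) is conserved to floating-point precision, which is a useful sanity check when scaling such models to many compartments or to pediatric parameter sets.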
BAI Yuchuan; ZHANG Yinqi; ZHANG Bin
Existing numerical models for nearshore waves are briefly introduced, with particular attention to a third-generation numerical model for shallow-water waves, which incorporates recent advances in wave research and is well adapted to seacoast, lake, and estuary environments. The applied model reproduces the significant wave height distribution for different wind directions. To integrate the model with coastal-area sediment processes, the sudden-deposition mechanism, the distribution of average silt content, and the change of sudden-deposition thickness over time in the nearshore area are simulated. These results can provide theoretical guidance for applications of the sediment sudden-deposition mechanism under storm waves in coastal areas, and directions for further development of the sudden-deposition model are outlined.
The evaluation of radiation hazards for components used in the space environment is based on knowledge of the radiation levels encountered in orbit. The models widely used to assess the near-Earth environment for a given mission are empirical trapped-radiation models derived from compilations of spacecraft measurements. However, these models are static and hence not suited to describing short-timescale variations of geomagnetic conditions. The transient observation-based particle (TOP) model breaks with this classical approach by introducing dynamic features, based on the observation and characterization of transient particle flux events, in addition to the classical mapping of steady-state flux levels. To produce a preliminary version of an operational model (currently available only for electrons at low Earth orbit, LEO), (i) the steady-state flux levels, (ii) the flux-enhancement probability distribution functions, and (iii) the flux decay-time constants (at given energies and positions in space) were determined, and an original dynamic model skeleton with these input parameters has been developed. The methodology is fully described and first flux predictions from the model are presented. In order to evaluate the net effect of radiation on a component, it is important to have an efficient tool that calculates the transfer of the outer radiation environment through the spacecraft material toward the location of the component under investigation. Using the TOP-model space radiation fluxes and the transmitted radiation environment characteristics derived through GEANT4 calculations, a case study of electron flux/dose variations in a small silicon volume is performed. Potential cases are assessed where the dynamics of the spacecraft radiation environment may have an impact on the observed radiation effects.
Su, Chung-Ho; Cheng, Ching-Hsue
This study aims to explore the factors in patients' rehabilitation achievement after total knee replacement (TKR) exercise, using a PCA-ANFIS emotion model-based game rehabilitation system that combines virtual reality (VR) and motion capture technology. The researchers combine a principal component analysis (PCA) and an adaptive…
费正顺; 胡斌; 叶鲁彬; 梁军
Since it is often difficult to build differential algebraic equations (DAEs) for chemical processes, a new data-based modeling approach is proposed using ARX (AutoRegressive with eXogenous inputs) models combined with neural networks under a partial least squares framework (ARX-NNPLS), which requires only input and output data rather than detailed process knowledge. To represent the dynamic and nonlinear behavior of the process, the ARX model combined with a neural network is used in the partial least squares (PLS) inner model between input and output latent variables. In the proposed dynamic optimization strategy based on the ARX-NNPLS model, neither parameterization nor iterative solution of DAEs is needed, as the ARX-NNPLS model properly represents the dynamic behavior of the process, and the computing time is greatly reduced compared to the conventional control vector parameterization method. To demonstrate the ARX-NNPLS model-based optimization strategy, the polyethylene grade transition in a gas-phase fluidized-bed reactor is considered. The optimization results show that the optimal trajectory of the quality index determined by the new approach moves faster to the target values and the computing time is much shorter.
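As background to the ARX component of the abstract above, single-input linear ARX identification by ordinary least squares can be sketched as follows. This is a deliberate simplification: the paper embeds ARX plus a neural network inside the PLS inner model, and the function name and interface here are illustrative only.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] by ordinary least squares.
    u: input sequence, y: output sequence; na, nb: model orders."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    n = max(na, nb)
    # Each regression row holds the na most recent outputs and nb most recent inputs.
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]  # (a coefficients, b coefficients)
```

With noiseless data generated by a first-order ARX model, the least-squares fit recovers the coefficients exactly up to floating-point precision.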
A function-based nonlinear least squares estimation (FNLSE) method is proposed and investigated for parameter estimation of the Jelinski-Moranda software reliability model. FNLSE extends the potential fitting functions of traditional least squares estimation (LSE) and takes the logarithm-transformed nonlinear least squares estimation (LogLSE) as a special case. A novel power-transformation-function-based nonlinear least squares estimation (powLSE) is proposed and applied to the parameter estimation of the Jelinski-Moranda model. Solved with the Newton-Raphson method, both LogLSE and powLSE are applied to mean-time-between-failures (MTBF) predictions on six standard software failure-time data sets. The experimental results demonstrate the effectiveness of powLSE with an optimal power index compared to classical least squares estimation (LSE), maximum likelihood estimation (MLE), and LogLSE in terms of the relative error (RE) index and the Braun statistic.
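The powLSE idea can be sketched for the Jelinski-Moranda model, whose expected inter-failure times are E[t_i] = 1/(φ(N−i+1)). The illustration below scans integer candidates for N and uses the closed-form optimum of φ for each candidate, rather than the Newton-Raphson solution used in the paper; the function name and the scan-based solver are assumptions for illustration.

```python
import numpy as np

def fit_jm_powlse(t, p=0.5, n_max=1000):
    """Jelinski-Moranda powLSE sketch: minimise sum_i (t_i^p - E[t_i]^p)^2
    with E[t_i] = 1/(phi*(N-i+1)). For each integer N, the optimal phi^(-p)
    is a linear least-squares coefficient, so we scan N and keep the best fit."""
    t = np.asarray(t, float)
    n = len(t)
    i = np.arange(1, n + 1)
    tp = t ** p
    best = (np.inf, None, None)
    for N in range(n, n_max + 1):
        c = (N - i + 1.0) ** (-p)            # E[t_i]^p = phi^(-p) * c_i
        u = np.dot(tp, c) / np.dot(c, c)     # optimal phi^(-p) for this N
        sse = np.sum((tp - u * c) ** 2)
        if sse < best[0]:
            best = (sse, N, u ** (-1.0 / p))
    return best[1], best[2]                  # (N_hat, phi_hat)
```

On exact (noise-free) expected times, the scan recovers the true parameters.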
Panning, M. P.; Marone, F.; Kim, A.; Capdeville, Y.; Cupillard, P.; Gung, Y.; Romanowicz, B.
We introduce several recent improvements to mode-based 3D and asymptotic waveform modeling and examine how to integrate them with numerical approaches for an improved model of upper-mantle structure under eastern Eurasia. The first step in our approach is to create a large-scale starting model including shear anisotropy using Nonlinear Asymptotic Coupling Theory (NACT; Li and Romanowicz, 1995), which models the 2D sensitivity of the waveform to the great-circle path between source and receiver. We have recently improved this approach by implementing new crustal corrections which include a non-linear correction for the difference between the average structure of several large regions from the global model with further linear corrections to account for the local structure along the path between source and receiver (Marone and Romanowicz, 2006; Panning and Romanowicz, 2006). This model is further refined using a 3D implementation of Born scattering (Capdeville, 2005). We have made several recent improvements to this method, in particular introducing the ability to represent perturbations to discontinuities. While the approach treats all sensitivity as linear perturbations to the waveform, we have also experimented with a non-linear modification analogous to that used in the development of NACT. This allows us to treat large accumulated phase delays determined from a path-average approximation non-linearly, while still using the full 3D sensitivity of the Born approximation. Further refinement of shallow regions of the model is obtained using broadband forward finite-difference waveform modeling. We are also integrating a regional Spectral Element Method code into our tomographic modeling, allowing us to move beyond many assumptions inherent in the analytic mode-based approaches, while still taking advantage of their computational efficiency. Illustrations of the effects of these increasingly sophisticated steps will be presented.
Damon M. Chandler
The ability of an image region to hide or mask a given target signal continues to play a key role in the design of numerous image processing and vision systems. However, current state-of-the-art models of visual masking have been optimized for artificial targets placed upon unnatural backgrounds. In this paper, we (1) measure the ability of natural-image patches to mask distortion; (2) analyze the performance of a widely accepted standard masking model in predicting these data; and (3) report optimal model parameters for different patch types (textures, structures, and edges). Our results reveal that the standard model of masking does not generalize across image types; rather, a proper model should be coupled with a classification scheme that can adapt the model parameters based on the type of content contained in local image patches. The utility of this adaptive approach is demonstrated via a spatially adaptive compression algorithm which employs patch-based classification. Despite the addition of extra side information and the high degree of spatial adaptivity, this approach yields an efficient wavelet compression strategy that can be combined with very accurate rate-control procedures.
Rambow, Mark; Preuss, Thomas; Berdux, Jörg; Conrad, Marc
Simplicity is the major advantage of REST-based web services. Whereas SOAP is widespread in complex, security-sensitive business-to-business applications, REST is widely used for mashups and end-user-centric applications. In that context we give an overview of REST and compare it to SOAP. Furthermore, we present the GeoDrawing application as an example of a REST-based mobile application and discuss the pros and cons of using REST in mobile application scenarios.
One of the efforts to increase people's income is to apply economic models that are suitable for the community. This model emphasizes economic empowerment of lower-class communities so that they eventually become economically self-sufficient. Applying such an economic model around the area of Sungai Melayu, particularly Piansak village, is one of the suggested efforts. The method applied in this research is Action Research. For the first year, the main task is to identify the local area. Only after a feasibility study of the collected data will a Society Economy Model be implemented accordingly. Further steps will follow based on the results of the first-year research. The potential businesses suited to the area will possibly be in raising livestock or poultry, such as
Frear, D.R.; Rashid, M.M.; Burchett, S.N.
We present a new methodology for predicting the fatigue life of solder joints for electronics applications. This approach involves integration of experimental and computational techniques. The first stage involves correlating the manufacturing and processing parameters with the starting microstructure of the solder joint. The second stage involves a series of experiments that characterize the evolution of the microstructure during thermal cycling. The third stage consists of a computer modeling and simulation effort that utilizes the starting microstructure and experimental data to produce a reliability prediction of the solder joint. This approach is an improvement over current methodologies because it incorporates the microstructure and properties of the solder directly into the model and allows these properties to evolve as the microstructure changes during fatigue.
Linde, Niklas; Lochbühler, Tobias; Dogan, Mine; Van Dam, Remke L.
We propose a new framework to compare alternative geostatistical descriptions of a given site. Multiple realizations of each of the considered geostatistical models and their corresponding tomograms (based on inversion of noise-contaminated simulated data) are used as a multivariate training image. The training image is scanned with a direct sampling algorithm to obtain conditional realizations of hydraulic conductivity that are not only in agreement with the geostatistical model, but also honor the spatially varying resolution of the site-specific tomogram. Model comparison is based on the quality of the simulated geophysical data from the ensemble of conditional realizations. The tomogram in this study is obtained by inversion of cross-hole ground-penetrating radar (GPR) first-arrival travel time data acquired at the MAcro-Dispersion Experiment (MADE) site in Mississippi (USA). Various heterogeneity descriptions ranging from multi-Gaussian fields to fields with complex multiple-point statistics inferred from outcrops are considered. Under the assumption that the relationship between porosity and hydraulic conductivity inferred from local measurements is valid, we find that conditioned multi-Gaussian realizations and derivatives thereof can explain the crosshole geophysical data. A training image based on an aquifer analog from Germany was found to be in better agreement with the geophysical data than the one based on the local outcrop, which appears to under-represent high hydraulic conductivity zones. These findings are only based on the information content in a single resolution-limited tomogram and extending the analysis to tracer or higher resolution surface GPR data might lead to different conclusions (e.g., that discrete facies boundaries are necessary). Our framework makes it possible to identify inadequate geostatistical models and petrophysical relationships, effectively narrowing the space of possible heterogeneity representations.
In order to safeguard the safety of passengers and reduce maintenance costs, it is necessary to analyze and evaluate the security risk of the Railway Signal System. However, the conventional Fuzzy Analytical Hierarchy Process (FAHP) cannot accurately describe the fuzziness and randomness of the judgment, and once the fuzzy sets are described using a membership function, the concept of fuzziness is no longer fuzzy. Thus a Fuzzy-FMECA method based on the cloud model is put forward. The Failure Modes, Effects and Criticality Analysis (FMECA) method is used to identify the risk, and FAHP based on the cloud model is used to determine the membership function of the fuzzy method; finally the group decision is obtained with the synthetically aggregated cloud model. The method's feasibility and effectiveness are shown in practical examples. Finally, Fuzzy-FMECA based on the cloud model and the conventional FAHP are used to assess the risk respectively; evaluation results show that the cloud model, when introduced into the risk assessment of the Railway Signal System, can realize the transition between precise and qualitative values by combining fuzziness and randomness, and provides more abundant information than the membership function of the conventional FAHP.
Niu, Jie; Phanikumar, Mantha S.
Distributed hydrologic models that simulate fate and transport processes at sub-daily timescales are useful tools for estimating pollutant loads exported from watersheds to lakes and oceans downstream. There has been considerable interest in the application of integrated process-based hydrologic models in recent years. While the models have been applied to address questions of water quantity and to better understand linkages between hydrology and land surface processes, routine applications of these models to address water quality issues are currently limited. In this paper, we first describe a general process-based watershed-scale solute transport modeling framework, based on an operator splitting strategy and a Lagrangian particle transport method combined with dispersion and reactions. The transport and the hydrologic modules are tightly coupled and the interactions among different hydrologic components are explicitly modeled. We test transport modules using data from plot-scale experiments and available analytical solutions for different hydrologic domains. The numerical solutions are also compared with an analytical solution for groundwater transit times with interactions between surface and subsurface flows. Finally, we demonstrate the application of the model to simulate bacterial fate and transport in the Red Cedar River watershed in Michigan and test hypotheses about sources and transport pathways. The watershed bacterial fate and transport model is expected to be useful for making near real-time predictions at marine and freshwater beaches.
Proudman Christopher J
Abstract Background Colic is an important cause of mortality and morbidity in domesticated horses, yet many questions about this condition remain to be answered. One such question is: does season have an effect on the occurrence of colic? Time-series analysis provides a rigorous statistical approach to this question but, until now, to our knowledge, it has not been used in this context. Traditional time-series modelling approaches have limited applicability in the case of relatively rare diseases, such as specific types of equine colic. In this paper we present a modelling approach that respects the discrete nature of the count data and, using a regression model with a correlated latent variable and one with a linear trend, we explored the seasonality of specific types of colic occurring at a UK referral hospital between January 1995 and December 2004. Results Six- and twelve-month cyclical patterns were identified for all colics, all medical colics, epiploic foramen entrapment (EFE), equine grass sickness (EGS), surgically treated colics, and large colon displacement/torsion colic groups. A twelve-month cyclical pattern only was seen in the large colon impaction colic group. There was no evidence of any cyclical pattern in the pedunculated lipoma group. These results were consistent irrespective of whether we used a model including latent correlation or trend. Problems were encountered in attempting to include both trend and latent serial dependence in models simultaneously; this is likely to be a consequence of a lack of power to separate these two effects in the presence of small counts, yet in reality the underlying physical effect is likely to be a combination of both. Conclusion The use of a regression model with either an autocorrelated latent variable or a linear trend has allowed us to establish formally a seasonal component to certain types of colic presented to a UK referral hospital over a 10-year period. These patterns appeared to coincide
Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
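The metrics recommended above have standard definitions: the log accuracy ratio ln(predicted/observed), whose median measures bias, and the median symmetric accuracy 100·(exp(median|ln(pred/obs)|) − 1), which measures accuracy without MAPE's asymmetry between over- and under-prediction. A minimal sketch (function names are illustrative):

```python
import numpy as np

def mape(obs, pred):
    # Mean absolute percentage error (undefined when any observation is zero).
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((pred - obs) / obs))

def median_log_accuracy_ratio(obs, pred):
    # Median of ln(pred/obs): a bias measure (0 means unbiased).
    return np.median(np.log(np.asarray(pred, float) / np.asarray(obs, float)))

def median_symmetric_accuracy(obs, pred):
    # 100*(exp(median(|ln(pred/obs)|)) - 1): symmetric percentage-style accuracy.
    q = np.abs(np.log(np.asarray(pred, float) / np.asarray(obs, float)))
    return 100.0 * np.expm1(np.median(q))
```

For example, predictions that are uniformly a factor of 2 too high give a median log accuracy ratio of ln 2 and a median symmetric accuracy of 100%.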
Yuanyuan Chuai; Keyan Xiao; Yihua Xuan; Shaobin Zhan
Geological data are typically multi-source, large in volume, and multi-scale. The construction of a Spatial Information Grid overcomes the limitations of personal computers when dealing with such data. The authors introduce the definition, architecture, and workflow of mineral resources assessment using the weights-of-evidence model based on the Spatial Information Grid (SIG). A case study on the prediction of copper mineral occurrence in the Middle-Lower Yangtze metallogenic belt is given. The results show that mineral resources assessment based on SIG is an effective new method which provides a way of sharing and integrating distributed geospatial information and greatly improves efficiency.
Candy, J V; Chambers, D H; Breitfeller, E F; Guidry, B L; Verbeke, J M; Axelrod, M A; Sale, K E; Meyer, A M
The detection of radioactive contraband is a critical problem in maintaining national security for any country. Emissions from threat materials challenge both detection and measurement technologies, especially when concealed by various types of shielding that complicate the transport physics significantly. The development of a model-based sequential Bayesian processor that captures the underlying transport physics, including scattering, offers a physics-based approach to attack this challenging problem. It is shown that this processor can be used to develop an effective detection technique.
Joyce, António; Coelho, Luis; Martins, João F.; Tavares, Nelson; R Pereira; Magalhães, Pedro
A solar trigeneration system, based on photovoltaic-thermal (PV/T) collectors, photovoltaic (PV) modules and a heat pump unit for heating and cooling, is modelled to forecast the thermal and electric yields of the system. The aim of the trigeneration system is to provide enough electricity, domestic hot water (DHW), heating and cooling power to meet the typical demand of an urban single family dwelling with limited roof area and allow the household to achieve a positive net energy status. The...
Olivier Maget, Nelly; Hétreux, Gilles; Le Lann, Jean-Marc
The complexity and size of industrial chemical processes require the monitoring of a growing number of process variables. Knowledge of these variables is generally based on measurements of system variables and on physico-chemical models of the process. Nevertheless, this information is imprecise because of process and measurement noise, so research efforts aim at developing new and more powerful techniques for the detection of process faults. In this work, we present a method for the fault ...
In this paper we compare two model-based measures of the output gap. The first measure, as proposed by Gali (2011), defines the output gap as the difference between actual output and the output level that would prevail if the economy operated in a perfectly competitive market without price or wage stickiness. We used annual data on the relevant variables for Thailand and computed the output gap under this approach. The calculated output gap shows that the Thai economy performs consistently...
Bouaziz, Rahma; Kallel, Slim; Coulette, Bernard
Security engineering with patterns is currently a very active area of research. Security patterns - an adaptation of Design Patterns to security - capture experts' experience in order to solve recurrent security problems in a structured and reusable way. In this paper, our objective is to describe an engineering process, called SCRIP (SeCurity patteRn Integration Process), which provides guidelines for integrating security patterns into component-based models. SCRIP defines activities and pro...
Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik;
A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...... to an industrial benchmark. The benchmark tests have shown that the algorithm is capable of detection and isolation of five different faults in the mechanical and hydraulic parts of the pump....
Kaganov, A Sh; Kir'yanov, P A
The objective of the present publication is to discuss the possibility of applying cybernetic modeling methods to overcome the apparent discrepancy between two kinds of speech records, viz. initial ones (e.g. obtained in the course of special investigation activities) and the voice prints obtained from persons subjected to criminalistic examination. The paper is based on literature sources and on original criminalistic examinations performed by the authors. PMID:26245103
Wrable, M.; Liss, A.; Kulinkina, A.; Koch, M.; Biritwum, N. K.; Ofosu, A.; Kosinski, K. C.; Gute, D M; Naumova, E. N.
90% of the worldwide schistosomiasis burden falls on sub-Saharan Africa. Control efforts are often based on infrequent, small-scale health surveys, which are expensive and logistically difficult to conduct. The use of satellite imagery to predictively model infectious disease transmission has great potential for public health applications. Transmission of schistosomiasis requires specific environmental conditions to sustain freshwater snails; however, it has unknown seasonality and is difficult to s...
In earlier work published by the author and co-authors, a dynamic engine model called a Mean Value Engine Model (MVEM) was developed. This model is physically based and is intended mainly for control applications. In its newer form, it is easy to fit to many different engines and requires little ...
Jensenius, Henriette; Thaysen, Jacob; Rasmussen, Anette Alsted; Veje, Lars Helt; Hansen, Ole; Boisen, Anja
A recently developed microcantilever probe with integrated piezoresistive readout has been applied as a gas sensor. Resistors, sensitive to stress changes, are integrated on the flexible cantilevers. This makes it possible to monitor the cantilever deflection electrically and with an integrated...... reference cantilever background noise is subtracted directly in the measurement. A polymer coated cantilever has been exposed to vapors of various alcohols and the resulting cantilever response has been interpreted using a simple evaporation model. The model indicates that the cantilever response is a...... direct measure of the molecular concentration of alcohol vapor. On the basis of the model the detection limit of this cantilever-based sensor is determined to be below 10 ppm for alcohol vapor measurements. Furthermore, the time response of the cantilever can be used to distinguish between different...
For the successful design of a space radiation Galactic Cosmic Ray (GCR) model, we develop a soft sensor based on an Artificial Neural Network (ANN) model. In the first step, the ANN-based soft sensor was constructed as an alternative way to model the space radiation environment. The ANN structure in this model is a Multilayer Perceptron (MLP) trained with the Levenberg-Marquardt algorithm, with 3 inputs and 2 outputs. As input variables, we use 12 years of data (Corr, Uncorr and Press) on GCR particles obtained from the Neutron Monitor of Bartol University (Fort Smith area), and the target outputs are (Corr and Press) from the same source but for the Inuvik area in the polar regions. In the validation step, during the training stage we obtained Root Mean Square Error (RMSE) values of 3.8670e-004 for Corr and 1.3414e-004 for Press, and Variance Accounted For (VAF) values of 99.9839% for Corr and 99.9831% for Press. These results were then applied in a Matlab GUI simulation (soft sensor simulation), which displays the estimated output values from the inputs (Corr and Press). Testing results showed errors of 0.133% and 0.014% for Corr and Press, respectively.
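The RMSE and VAF figures quoted above follow standard definitions: RMSE is the root of the mean squared residual, and VAF = 100·(1 − var(y − ŷ)/var(y)), so a perfect fit scores 0 and 100% respectively. A minimal sketch for computing them (function names are illustrative):

```python
import numpy as np

def rmse(y, yhat):
    # Root mean square error of predictions yhat against targets y.
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.sqrt(np.mean((y - yhat) ** 2))

def vaf(y, yhat):
    # Variance Accounted For, in percent (100 = perfect fit).
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * (1.0 - np.var(y - yhat) / np.var(y))
```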
F. S. Marzano
A large set of ground-based multi-frequency microwave radiometric simulations and measurements during different precipitation regimes is analysed. Simulations are performed for a set of frequencies from 22 to 60 GHz, representing the channels currently available on an operational ground-based radiometric system. Results are illustrated in terms of comparisons between measurements and model data in order to show that the observed radiometric signatures can be attributed to rainfall scattering and absorption. An inversion algorithm has been developed, based on the simulated data, to retrieve rain rate from passive radiometric observations. As a validation of the approach, we have analyzed radiometric measurements during rain events that occurred in Boulder, Colorado, and at the Atmospheric Radiation Measurement (ARM) Program's Southern Great Plains (SGP) site in Lamont, Oklahoma, USA, comparing rain rate estimates with available simultaneous rain gauge data.
Knowledge of the circulation of water in bays, together with the possibility of simulating future conditions, can be of great interest for solving problems related to cooling water for nuclear power plants, studies of sediments and water pollution, and the study of civil engineering works planned in bays. A numerical circulation model of water in bays is applied to the conditions of Sepetiba Bay on the Rio de Janeiro coast. The system of partial differential equations that constitutes the model was solved by the finite difference method, using a uniform Cartesian grid and uniform time steps, generating a two-dimensional depth-averaged flow field. The results obtained by comparing the values of the model with measurements taken in the bay were satisfactory, assuring its credibility and efficiency. A program code was developed for the application, providing output at any predetermined time step, with a resolution of 30 seconds, of the average levels, flows, velocities and depths of water for each grid cell along the bay, in addition to a graphic of the flow. (Author)
The extraction and recovery of Pu(IV) from nitric acidic medium has been examined by pseudo-emulsion based hollow fiber strip dispersion (PEHFSD) in a microporous hydrophobic polypropylene hollow-fiber membrane contactor. The technology of supported liquid membranes with strip dispersion has advantages over conventional supported liquid membranes, solving the inherent stability problems of the membrane. Mathematical models for extraction in the hollow-fiber contactor were also developed. The mass transfer coefficients were calculated from the experimental results, and the models are presented. The applicability of the PEHFSD technique was demonstrated for the recovery of Pu(IV) from oxalate supernatant waste generated during plutonium precipitation by oxalic acid. (author)
A classical way to reduce a radar's data is to compute the spectrum using the FFT and then to identify the different peak contributions. But in case an overlap between the different echoes (atmospheric echo, clutter, hydrometeor echo, ...) exists, Fourier-like techniques provide poor frequency resolution, and then sophisticated peak identification may not be able to detect the different echoes. In order to improve the number of reduced data and their quality relative to Fourier spectrum analysis, three different methods are presented in this paper and applied to actual data. Their approach consists of predicting the main frequency components, which avoids the development of very sophisticated peak-identification algorithms. The first method is based on cepstrum properties, generally used to determine the shift between two close identical echoes. We will see in this paper that this method cannot provide a better estimate than Fourier-like techniques in operational use. The second method consists of an autoregressive estimation of the spectrum. Since the tests were promising, this method was applied to reduce the radar data obtained during two thunderstorms. The autoregressive method, which is very simple to implement, improved the Doppler-frequency data reduction relative to the FFT spectrum analysis. The third method exploits the MUSIC algorithm, one of the numerous subspace-based methods, which is well adapted to estimating spectra composed of pure lines. A statistical study of the performance of this method is presented, and points out the very good resolution of this estimator in comparison with Fourier-like techniques. Application to actual data confirms the good qualities of this estimator for reducing radar data.
Key words. Meteorology and atmospheric dynamics (tropical meteorology); Radio science (signal processing); General (techniques applicable in three or more fields)
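The autoregressive approach described in the abstract above can be sketched by solving the Yule-Walker equations for the AR coefficients and evaluating the resulting all-pole spectrum. The signal, model order, and frequencies below are illustrative, not taken from the radar data.

```python
import numpy as np

def ar_psd(x, order, nfft=1024):
    """All-pole (autoregressive) spectrum estimate via the Yule-Walker equations."""
    x = x - x.mean()
    n = len(x)
    # biased autocorrelation estimates r[0..order] (biased form keeps R positive semidefinite)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # solve the Yule-Walker system R a = r[1:] for the AR coefficients a
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - np.dot(a, r[1:])          # driving-noise variance
    # AR spectrum: sigma^2 / |1 - sum_k a_k e^{-i 2 pi f k}|^2
    f = np.arange(nfft // 2) / nfft
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1))) @ a) ** 2
    return f, sigma2 / denom

# two close tones in noise, mimicking overlapping echo components
rng = np.random.default_rng(0)
t = np.arange(512)
x = np.sin(2 * np.pi * 0.12 * t) + 0.9 * np.sin(2 * np.pi * 0.14 * t) \
    + 0.3 * rng.standard_normal(512)
f, p = ar_psd(x, order=8)
```

The peak of `p` lands near the true normalized frequencies (0.12-0.14), where an FFT of comparable length would smear the two components together.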
This paper presents an approach for weighting indices in comprehensive evaluation. In accordance with the principle that the overall differences among the evaluation objects should be maximally differentiated, an adjusted weighting coefficient is introduced. Based on the idea of maximizing the difference between the adjusted evaluation scores of each evaluation object and their mean, an objective programming model is established that makes the differentiation between evaluation scores more pronounced and determines the combined weighting coefficients, thereby avoiding the contradictory and poorly distinguishable evaluation results of single weighting methods. The proposed model is demonstrated on 2,044 observations. The empirical results show that the combined weighting method has the lowest misjudgment probability, as well as the lowest error probability, when compared with four single weighting methods, namely the G1, G2, variation coefficient, and deviation methods.
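The core idea, choosing combination coefficients for several single-method weight vectors so that the composite scores are maximally differentiated, can be sketched for the two-method case. The indicator matrix and the single-method weight vectors below are synthetic stand-ins, not the paper's 2,044 observations.

```python
import numpy as np

def combined_weights(X, w1, w2, steps=1001):
    """Pick alpha in [0, 1] so w = alpha*w1 + (1-alpha)*w2 maximizes the
    variance of the composite scores X @ w (maximal differentiation)."""
    best_a, best_v = 0.0, -np.inf
    for a in np.linspace(0.0, 1.0, steps):
        w = a * w1 + (1 - a) * w2
        w = w / w.sum()                      # keep weights normalized
        v = np.var(X @ w)
        if v > best_v:
            best_a, best_v = a, v
    w = best_a * w1 + (1 - best_a) * w2
    return w / w.sum(), best_a

rng = np.random.default_rng(1)
X = rng.random((50, 4))                       # 50 evaluation objects, 4 indices
w_g1 = np.array([0.4, 0.3, 0.2, 0.1])         # e.g. a G1-style weight vector
w_cv = np.std(X, axis=0) / np.std(X, axis=0).sum()  # variation-coefficient weights
w, alpha = combined_weights(X, w_g1, w_cv)
```

Because the grid includes the endpoints alpha = 0 and alpha = 1, the combined weights are guaranteed to differentiate the scores at least as well as either single method alone.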
To use reasoning knowledge accurately and efficiently, many reasoning methods have been proposed. However, the differences in form among the methods may obstruct their systematic analysis and harmonious integration. In this paper, a common reasoning model, JUM (Judgement Model), is introduced. According to JUM, a common knowledge representation form is abstracted from different reasoning methods and its limitation is reduced. We also propose an algorithm for transforming one type of JUM into another. In some cases, the algorithm can be used to resolve the key problem of integrating different types of JUM in one system. It is possible that a new architecture of knowledge-based system can be realized under JUM.
Fiala, P.; Drexler, P.; Nespor, D.
The authors propose an analysis of a model solar element based on the principle of a resonance system that facilitates the transformation of an external form of incident energy into electrical energy. A similar principle provides the basis for harvesters designed to operate at lower frequencies (Jirků, Fiala and Kluge, 2010; Wen, Wen and Wong, 2000). In these harvesters, the efficiency of the energy form transformation can be controlled from the frequency spectrum of an external source (the Sun).
Yan Weiwu(阎威武); Shao Huihe
Many industrial process systems are becoming more and more complex and are characterized by distributed features. To ensure that such a system operates in working order, distributed parameter values are often inspected at subsystems or at different points in order to judge the working conditions of the system and make global decisions. In this paper, a parallel decision model based on Support Vector Machines (PDMSVM) is introduced and applied to distributed fault diagnosis in industrial processes. PDMSVM is convenient for information fusion in distributed systems, and it performs well in fault diagnosis with distributed features. PDMSVM makes decisions based on synthetic information from subsystems and takes advantage of the Support Vector Machine. Decisions made by PDMSVM are therefore highly reliable and accurate.
Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematic constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and it has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost.
The power curve of a wind turbine depicts the relationship between output power and hub-height wind speed and is an important characteristic of the turbine. Power curves aid in energy assessment, warranty formulations, and performance monitoring of turbines. With the growth of the wind industry, turbines are being installed in diverse climatic conditions, onshore and offshore, and in complex terrains, causing significant departure of these curves from the warranted values. Accurate models of power curves can play an important role in improving the performance of wind-energy-based systems. This paper presents a detailed review of different approaches for modelling the wind turbine power curve. The methodology of modelling depends upon the purpose of modelling, the availability of data, and the desired accuracy. The objectives of modelling, various issues involved therein, and the standard procedure for power performance measurement with its limitations have therefore been discussed here. The modelling methods described here use data from manufacturers' specifications and actual data from wind farms. Classification of modelling methods, various modelling techniques available in the literature, model evaluation criteria, and the application of soft computing methods for modelling are then reviewed in detail. The drawbacks of the existing methods and the future scope of research are also identified.
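One simple parametric approach from this literature fits a logistic (S-shaped) curve to measured wind speed/power pairs. The turbine figures below are synthetic, chosen only to illustrate the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_power(v, p_rated, v_mid, scale):
    # 3-parameter logistic approximation of the S-shaped region of a power curve
    return p_rated / (1 + np.exp(-(v - v_mid) / scale))

# synthetic "measured" curve: ~2 MW turbine, inflection near 9 m/s, light noise
rng = np.random.default_rng(2)
v = np.linspace(3, 20, 60)                       # hub-height wind speed, m/s
p_true = logistic_power(v, 2000.0, 9.0, 1.3)     # kW
p_obs = p_true + rng.normal(0, 20.0, v.size)

popt, _ = curve_fit(logistic_power, v, p_obs, p0=[1800.0, 8.0, 1.0])
```

The fitted `p_rated` and `v_mid` recover the generating values closely; more flexible forms (higher-order polynomials, neural networks) trade this simplicity for accuracy near cut-in and cut-out.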
Busing, R.T.; Solomon, A.M.; McKane, R.B.; Burdick, C.A.
The FORCLIM model of forest dynamics was tested against field survey data for its ability to simulate basal area and composition of old forests across broad climatic gradients in western Oregon, USA. The model was also tested for its ability to capture successional trends in ecoregions of the west Cascade Range. It was then applied to simulate present and future (1990-2050) forest landscape dynamics of a watershed in the west Cascades. Various regimes of climate change and harvesting in the watershed were considered in the landscape application. The model was able to capture much of the variation in forest basal area and composition in western Oregon even though temperature and precipitation were the only inputs that were varied among simulated sites. The measured decline in total basal area from tall coastal forests eastward to interior steppe was matched by simulations. Changes in simulated forest dominants also approximated those in the actual data. Simulated abundances of a few minor species did not match actual abundances, however. Subsequent projections of climate change and harvest effects in a west Cascades landscape indicated no change in forest dominance as of 2050. Yet climate-driven shifts in the distributions of some species were projected. The simulation of both stand-replacing and partial-stand disturbances across western Oregon improved agreement between simulated and actual data. Simulations with fire as an agent of partial disturbance suggested that frequent fires of low severity can alter forest composition and structure as much as or more than severe fires at historic frequencies. © 2007 by the Ecological Society of America.
S. A. Bornyakov
Results of long-term experimental studies and modelling of faulting are briefly reviewed, and research methods and state-of-the-art issues are described. The article presents the main results of faulting modelling with the use of non-transparent elasto-viscous plastic and optically active models. An area of active dynamic influence of a fault (AADIF) is the term introduced to characterise a fault as a 3D geological body. It is shown that AADIF's width (M) is determined by the thickness of the layer wherein the fault occurs (H), its viscosity (η) and the strain rate (V). Multiple correlation equations are proposed to show the relationships between AADIF's width (M) and H, η and V for faults of various morphological and genetic types. The irregularity of AADIF in time and space is characterised in view of the staged formation of the internal fault structure of such areas and the geometric and dynamic parameters of AADIF, which vary along the fault strike. The authors pioneered the application of the open system conception to explain regularities of structure formation in AADIFs. It is shown that faulting is a synergistic process of continuous changes of structural levels of strain, which differ in the manifestation of specific self-similar fractures of various scales. Such levels change due to self-organization processes of fracture systems. Fracture dissipative structures (FDS) is the term introduced to describe systems of fractures that are subject to self-organization. It is proposed to consider informational entropy and fractal dimensions in order to reveal FDS in AADIF. Relationships are studied between structure formation in AADIF and accompanying processes, such as acoustic emission and terrain development above zones wherein faulting takes place. Optically active elastic models were designed to simulate the stress-and-strain state of AADIF of the main standard types of fault jointing zones and their analogues in nature, and modelling results are
Pan, Bingjun; Zhang, Huichun
The equilibrium Polanyi adsorption potential was reconstructed as ε = -RT ln(Ca(or H)/δ) to correlate the characteristic energy (E) of Polanyi-based models (qe = f[ε/E]) with the properties or structures of adsorbates, where qe is the equilibrium adsorption capacity, Ca(or H) is the concentration converted from the equilibrium aqueous concentration at the same activity, corresponding to adsorption from the gas or n-hexadecane (HD) phase by the water-wet adsorbent, and "δ" is an arbitrary divisor to converge the model fitting. Subsequently, the modified Dubinin-Astakhov model based on the reconstructed ε was applied to aqueous adsorption on activated carbon, black carbon, multiwalled carbon nanotubes, and polymeric resin. The fitting results yielded intrinsic characteristic energies Ea, derived from aqueous-to-gas phase conversion, or EH, derived from aqueous-to-HD phase conversion, which reflect the contributions of the overall or specific adsorbate-adsorbent interactions to the adsorption. Effects of the adsorbate and adsorbent properties on Ea or EH then emerge that are not revealed by the original characteristic energy (Eo), i.e., adsorbates with a tendency to form stronger interactions with an adsorbent have larger Ea and EH. Additionally, comparison of Ea and EH allows quantitative analysis of the contributions of nonspecific interactions; that is, a significant relationship was established between the nonspecific interactions and Abraham's descriptors for the adsorption of all 32 solutes on the four different adsorbents: (Ea - EH) = 24.7 × V + 9.7 × S - 19.3 (R(2) = 0.97), where V is McGowan's characteristic volume for adsorbates, and S reflects the adsorbate's polarity/polarizability. PMID:24815932
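A sketch of fitting the Dubinin-Astakhov model, qe = Q0·exp[-(ε/E)^b], using a potential of the reconstructed form ε = -RT ln(C/δ). The concentrations, divisor, and "true" parameters are invented for illustration; only the model form follows the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1
T = 298.15     # K

def dubinin_astakhov(eps, q0, E, b):
    # qe = Q0 * exp(-(eps/E)^b); E is the characteristic energy (kJ/mol)
    return q0 * np.exp(-(eps / E) ** b)

# synthetic isotherm: eps computed from hypothetical converted concentrations
rng = np.random.default_rng(3)
Ca = np.logspace(-6, -2, 15)        # converted concentrations (arbitrary units)
delta = 1.0                         # the arbitrary divisor from the abstract
eps = -R * T * np.log(Ca / delta)   # reconstructed Polanyi potential, kJ/mol
q_true = dubinin_astakhov(eps, 1.2, 18.0, 2.0)
q_obs = q_true * (1 + rng.normal(0, 0.02, eps.size))   # 2% noise

popt, _ = curve_fit(dubinin_astakhov, eps, q_obs, p0=[1.0, 15.0, 2.0], maxfev=10000)
```

The fitted E is the characteristic energy whose gas-phase (Ea) and HD-phase (EH) variants the abstract compares; changing which converted concentration feeds ε changes which energy is recovered.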
HUA Zu-lin; XING Ling-hang; GU Li
The modified QUICK scheme on an unstructured grid was used to improve the advection flux approximation, and a depth-averaged turbulence model with this scheme, based on the FVM with a SIMPLE-series algorithm, was established and applied to spur-dike flow computation. In this model, the over-relaxed approach was adopted to estimate the diffusion flux, in view of its advantages in reducing the errors and numerical instability usually encountered on non-orthogonal meshes. Two spur-dike cases with different deflection angles (90° and 135°) were analyzed to validate the model. Computed results show that the predicted velocities and recirculation lengths are in good agreement with the observed data. Moreover, the computations on structured and unstructured grids were compared for approximately equivalent numbers of grid cells. It can be concluded that the precision with unstructured grids is higher than that with structured grids, although the CPU time required with unstructured grids is slightly greater. Thus, it is worthwhile to apply the method to the numerical simulation of practical hydraulic engineering.
Mizoguchi, Tomohiro; Kanai, Satoshi
Along with the rapid growth of industrial X-ray CT scanning systems, it is now possible to non-destructively acquire the entire meshes of assemblies consisting of a set of parts. For advanced inspections of the assemblies based on their CT scanned meshes, such as estimation of their assembly errors or examination of their behavior in motion, it is necessary to accurately decompose the mesh and to extract a set of partial meshes, each of which corresponds to a part. Moreover, it is required to create models which can be used for real-product-based simulations. In this paper, we focus on CT scanned meshes of gear assemblies as examples and propose methods for establishing such advanced inspections of the assemblies. We first propose a method that accurately decomposes the mesh into partial meshes, each of which corresponds to a gear, based on periodicity recognition. The key idea is first to accurately recognize the periodicity of each gear and then to extract the partial meshes as sets of topologically connected mesh elements where the periodicities are valid. Our method can robustly and accurately recognize periodicities from noisy scanned meshes. In contrast to previous methods, our method can deal with single-material CT scanned meshes and can estimate the correct boundaries of neighboring parts with no prior knowledge. Moreover, it can efficiently extract the partial meshes from large scanned meshes containing about one million triangles in a few minutes. We also propose a method for creating simulation models which can be used for gear teeth contact evaluation, using the extracted partial meshes and their periodicities. Such an evaluation of teeth contacts is one of the most important functions in kinematic simulations of gear assemblies for predicting power transmission efficiency, noise and vibration. We demonstrate the effectiveness of our method on a variety of artificial and CT scanned meshes.
Ramos-Scharrón, Carlos E; Macdonald, Lee H
Accelerated erosion and increased sediment yields resulting from changes in land use are a critical environmental problem. Resource managers and decision makers need spatially explicit tools to help them predict the changes in sediment production and delivery due to unpaved roads and other types of land disturbance. This is a particularly important issue in much of the Caribbean because of the rapid pace of development and potential damage to nearshore coral reef communities. The specific objectives of this study were to: (1) develop a GIS-based sediment budget model; (2) use the model to evaluate the effects of unpaved roads on sediment delivery rates in three watersheds on St. John in the US Virgin Islands; and (3) compare the predicted sediment yields to pre-existing data. The St. John Erosion Model (STJ-EROS) is an ArcInfo-based program that uses empirical sediment production functions and delivery ratios to quantify watershed-scale sediment yields. The program consists of six input routines and five routines to calculate sediment production and delivery. The input routines have interfaces that allow the user to adjust the key variables that control sediment production and delivery. The other five routines use pre-set erosion rate constants, user-defined variables, and values from nine data layers to calculate watershed-scale sediment yields from unpaved road travelways, road cutslopes, streambanks, treethrow, and undisturbed hillslopes. STJ-EROS was applied to three basins on St. John with varying levels of development. Predicted sediment yields under natural conditions ranged from 2 to 7 Mg km(-2) yr(-1), while yield rates for current conditions ranged from 8 to 46 Mg km(-2) yr(-1). Unpaved roads are estimated to be increasing sediment delivery rates by 3-6 times for Lameshur Bay, 5-9 times for Fish Bay, and 4-8 times for Cinnamon Bay. Predicted basin-scale sediment yields for both undisturbed and current conditions are within the range of measured sediment yields.
Fatichi, Simone; Vivoni, Enrique R.; Ogden, Fred L.; Ivanov, Valeriy Y.; Mirus, Benjamin B.; Gochis, David; Downer, Charles W.; Camporese, Matteo; Davison, Jason H.; Ebel, Brian A.; Jones, Norm; Kim, Jongho; Mascaro, Giuseppe; Niswonger, Richard; Restrepo, Pedro; Rigon, Riccardo; Shen, Chaopeng; Sulis, Mauro; Tarboton, David
Process-based hydrological models have a long history dating back to the 1960s. Criticized by some as over-parameterized, overly complex, and difficult to use, a more nuanced view is that these tools are necessary in many situations and, in a certain class of problems, they are the most appropriate type of hydrological model. This is especially the case in situations where knowledge of flow paths or distributed state variables and/or preservation of physical constraints is important. Examples of this include: spatiotemporal variability of soil moisture, groundwater flow and runoff generation, sediment and contaminant transport, or when feedbacks among various Earth’s system processes or understanding the impacts of climate non-stationarity are of primary concern. These are situations where process-based models excel and other models are unverifiable. This article presents this pragmatic view in the context of existing literature to justify the approach where applicable and necessary. We review how improvements in data availability, computational resources and algorithms have made detailed hydrological simulations a reality. Avenues for the future of process-based hydrological models are presented suggesting their use as virtual laboratories, for design purposes, and with a powerful treatment of uncertainty.
Thekdi, Shital A; Santos, Joost R
Disruptive events such as natural disasters, loss or reduction of resources, work stoppages, and emergent conditions have potential to propagate economic losses across trade networks. In particular, disruptions to the operation of container port activity can be detrimental for international trade and commerce. Risk assessment should anticipate the impact of port operation disruptions with consideration of how priorities change due to uncertain scenarios and guide investments that are effective and feasible for implementation. Priorities for protective measures and continuity of operations planning must consider the economic impact of such disruptions across a variety of scenarios. This article introduces new performance metrics to characterize resiliency in interdependency modeling and also integrates scenario-based methods to measure economic sensitivity to sudden-onset disruptions. The methods will be demonstrated on a U.S. port responsible for handling $36.1 billion of cargo annually. The methods will be useful to port management, private industry supply chain planning, and transportation infrastructure management. PMID:26271771
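Interdependency modeling in this line of work is often formulated as an inoperability input-output model, in which sector inoperability q satisfies q = A*q + c* for an interdependency matrix A* and direct disruption c*. That identification, the 3-sector matrix, and the non-port output figures below are all assumptions for illustration; only the $36.1 billion cargo value comes from the abstract.

```python
import numpy as np

# Hypothetical 3-sector interdependency matrix A* (normalized Leontief form):
# sector 0 = port operations, sectors 1-2 = dependent industries
A_star = np.array([[0.10, 0.20, 0.05],
                   [0.15, 0.05, 0.10],
                   [0.05, 0.10, 0.08]])
c_star = np.array([0.30, 0.0, 0.0])      # 30% direct inoperability at the port
x = np.array([36.1e9, 20.0e9, 12.0e9])   # annual output per sector (illustrative)

# Equilibrium inoperability: q = (I - A*)^-1 c*
q = np.linalg.solve(np.eye(3) - A_star, c_star)
daily_loss = (x / 365.0) @ q             # economic loss per disrupted day
```

The port's equilibrium inoperability q[0] exceeds the 30% direct shock because losses propagate through the trade network and feed back, which is the amplification effect the resiliency metrics are meant to capture.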
Moreira, Pedro; Zemiti, Nabil; Liu, Chao; Poignet, Philippe
Controlling the interaction between robots and living soft tissues has become an important issue as the number of robotic systems inside the operating room increases. Much research has been done on force control to help surgeons during medical procedures, such as physiological motion compensation and tele-operation systems with haptic feedback. In order to increase the performance of such controllers, this work presents a novel force control scheme using an Active Observer (AOB) based on a viscoelastic interaction model. The control scheme has been shown to be stable through theoretical analysis, and its performance was evaluated by in vitro experiments. In order to evaluate how the force control scheme behaves in the presence of physiological motion, experiments considering breathing and beating-heart disturbances are presented. The proposed control scheme exhibited stable behavior in both static and moving environments. The viscoelastic AOB achieved a compensation ratio of 87% for the breathing motion and 79% for the beating heart motion. PMID:24612709
Full text of publication follows: Nuclear Power Plants (NPPs) are always designed for the highest level of safety against postulated accidents, which may be initiated by internal or external causes. One of the external/internal causes which may lead to an accident in the reactor and its associated systems is fire in certain vital areas of the plant. Conventionally, the fire containment approach and/or the fire confinement approach is used in designing the fire protection systems of NPPs. Indian NPPs (PHWRs) follow the combined approach to ensure plant safety, and all newly designed plants are required to comply with the provisions of the Atomic Energy Regulatory Board (AERB) fire safety guide. In respect of older plants, reassessment of the adequacy of fire safety provisions in the light of current advances has become essential so as to decide upon the steps for retrofitting. Keeping this in mind, a deterministic fire hazard analysis was carried out for the Madras Atomic Power Station (MAPS). As a part of this exercise, detailed fire consequence analysis was required to be carried out for various critical areas. The choice of a CFD-based code was considered appropriate for these studies. A dedicated fire hazard analysis code, Fire Dynamics Simulator (FDS) from NIST, was used to perform these case studies. The code has the option to use advanced fire models based on the Large Eddy Simulation (LES) technique / Direct Numerical Simulation (DNS) to model fire-generated conditions. The LES option has been extensively used in the present studies, which were primarily aimed at estimating the damage time for important safety-related cables. The present paper describes the salient features of the methodology and important results for one of the most critical areas, i.e. the cable bridge area of MAPS. The typical dimensions of the cable bridge area are (length x breadth x height) 12 m x 6 m x 2.5 m, with an opening on one side of the cable bridge area. With almost equal gap, six numbers
缪淮扣; 陈圣波; 曾红卫
In this paper, a model-based testing approach for Web applications is proposed, covering Web application modeling, test case generation, test case execution, and visualization of models and test sequences. The authors design and implement a model-based testing system for Web applications in which an FSM serves as the formal testing model of the Web application under test. The system integrates a Model Transformer, a Test Purpose Analyzer, a Test Sequence Generator, visualization tools for the FSM and test sequences, a Test Execution Engine, and other tools. Besides supporting traditional coverage criteria such as State Coverage, Transition Coverage, and Transition Pair Coverage, the system also introduces improved criteria, including Optimized State and Transition Coverage, Complete Message Pass Coverage, Complete Function Interaction Coverage, and Function Loop Interaction Coverage. The system is demonstrated on the Xingning Reservoir Resettlement management information system as the Web application under test.
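Transition coverage, the simplest of the criteria listed, can be sketched on a toy FSM: reach each transition's source state via a BFS prefix, then fire the transition. The page names and actions below are invented, not taken from the paper's system.

```python
from collections import deque

# toy Web-application FSM: state -> {action: next_state}
fsm = {
    "Login": {"submit": "Home", "reset": "Login"},
    "Home":  {"open_form": "Form", "logout": "Login"},
    "Form":  {"save": "Home", "cancel": "Home"},
}

def shortest_path(fsm, start, goal_state):
    """BFS over states; returns the action sequence that reaches goal_state."""
    seen, queue = {start}, deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if state == goal_state:
            return path
        for action, nxt in fsm[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

def transition_cover(fsm, start):
    """One test sequence per transition: reach its source state, then fire it."""
    suites = []
    for src, edges in fsm.items():
        for action in edges:
            suites.append(shortest_path(fsm, start, src) + [action])
    return suites

tests = transition_cover(fsm, "Login")
```

Replaying every sequence from the start state exercises all six transitions; the stronger criteria in the abstract (e.g. Transition Pair Coverage) extend this by requiring consecutive transition pairs rather than single transitions.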
Chunying Zhang; Sun Chen; Fang Wu; Kai Song
To overcome the large time delay in measuring the hardness of mixed rubber, rheological parameters were used to predict the hardness. A novel Q-based model updating strategy was proposed as a universal platform for tracking time-varying properties. By using a few selected support samples to update the model, the strategy can dramatically reduce storage cost and overcome the adverse influence of low signal-to-noise-ratio samples. Moreover, it can be applied to any statistical process monitoring system without drastic changes, which is practical for industrial use. As examples, the Q-based strategy was integrated with three popular algorithms (partial least squares (PLS), recursive PLS (RPLS), and kernel PLS (KPLS)) to form novel regression algorithms: QPLS, QRPLS and QKPLS, respectively. Applications to predicting mixed-rubber hardness at a large-scale tire plant in east China confirm the theoretical considerations.
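The abstract does not specify the exact form of the Q statistic; a common choice in statistical process monitoring is the PCA squared prediction error (SPE). The sketch below flags candidate support samples whose Q exceeds an empirical control limit; the data, component count, and threshold are illustrative.

```python
import numpy as np

def q_statistic(X, mean, components):
    """Squared prediction error (SPE/Q): residual after projecting onto the PCs."""
    Xc = X - mean
    recon = (Xc @ components.T) @ components
    return np.sum((Xc - recon) ** 2, axis=1)

rng = np.random.default_rng(4)
X_ref = rng.normal(size=(200, 5))        # reference (training) process data
mean = X_ref.mean(axis=0)
# principal components of the reference data via SVD
_, _, Vt = np.linalg.svd(X_ref - mean, full_matrices=False)
P = Vt[:2]                               # retain 2 components

q_ref = q_statistic(X_ref, mean, P)
limit = np.quantile(q_ref, 0.95)         # simple empirical 95% control limit

X_new = rng.normal(size=(50, 5))
X_new[:10] += 3.0                        # drifted (time-varying) samples
q_new = q_statistic(X_new, mean, P)
support = X_new[q_new > limit]           # candidate support samples for updating
```

Samples with Q above the limit carry information the current model does not explain, so updating with only those few samples tracks the drift while skipping redundant and noisy observations.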
Chen, Jian Lin; Steele, Terry W J; Stuckey, David C
The sensitivity of anaerobic digestion metabolism to a wide range of solutes makes it important to be able to monitor toxicants in the feed to anaerobic digesters in order to optimize their operation. In this study, a rapid fluorescence measurement technique based on resazurin reduction using a microplate reader was developed and applied for the detection of toxicants and/or inhibitors to digesters. A kinetic model was developed to describe the process by which resazurin is reduced to resorufin, and eventually to dihydroresorufin, under anaerobic conditions. By modeling the assay results of resazurin (0.05, 0.1, 0.2, and 0.4 mM) reduction by a pure facultative anaerobic strain, Enterococcus faecalis, and fresh mixed anaerobic sludge, with or without 10 mg L(-1) spiked pentachlorophenol (PCP), we found that the pseudo-first-order rate constant for the reduction of resazurin to resorufin, k1, was a good measure of "toxicity". With lower biomass density and the optimal resazurin addition (0.1 mM), the toxicity of 10 mg L(-1) PCP for E. faecalis and fresh anaerobic sludge was detected in 10 min. Using this model, the toxicity differences among seven chlorophenols to E. faecalis and fresh mixed anaerobic sludge were elucidated within 30 min. The toxicity differences determined by this assay were comparable to the toxicity sequences of various chlorophenols reported in the literature. These results suggest that the assay developed in this study not only can quickly detect toxicants for anaerobic digestion but also can efficiently discriminate the toxicity differences among a variety of similar toxicants. PMID:26457928
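The two-step reduction resazurin -> resorufin -> dihydroresorufin follows series first-order kinetics, so the fluorescent intermediate (resorufin) obeys B(t) = A0·k1/(k2-k1)·(e^(-k1·t) - e^(-k2·t)). The rate constants, concentrations, and noise level below are invented; the sketch only illustrates extracting k1 as a toxicity measure, assuming fluorescence is proportional to resorufin.

```python
import numpy as np
from scipy.optimize import curve_fit

def resorufin(t, a0, k1, k2):
    # series first-order kinetics: resazurin --k1--> resorufin --k2--> dihydroresorufin
    return a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

rng = np.random.default_rng(5)
t = np.linspace(0, 30, 31)                     # minutes
b_ctrl = resorufin(t, 0.1, 0.30, 0.05)         # uninhibited culture
b_ctrl = b_ctrl + rng.normal(0, 0.001, t.size)
b_tox = resorufin(t, 0.1, 0.12, 0.05)          # "toxicant present": smaller k1

p_ctrl, _ = curve_fit(resorufin, t, b_ctrl, p0=[0.1, 0.2, 0.04], maxfev=10000)
p_tox, _ = curve_fit(resorufin, t, b_tox, p0=[0.1, 0.2, 0.04], maxfev=10000)
inhibition = 1 - p_tox[1] / p_ctrl[1]          # fractional drop in k1
```

The drop in the fitted k1 between the control and the spiked well is the toxicity readout, which is why the assay resolves differences among similar toxicants within tens of minutes.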
In the event of a severe accident in a Pressurized Water Reactor, corium, a mixture of molten materials issued from the fuel, cladding and structural elements, appears in the reactor core. One severe accident scenario assumes that corium melts through the reactor pressure vessel and spreads over the concrete basemat of the reactor pit. The main question that has to be addressed in this scenario is whether and when the corium will penetrate the basemat, since this would induce groundwater contamination, for example. The general approach used in this work is based on the 'phase segregation' model developed by CEA. The solid phase is located at the corium pool boundaries as a solid crust composed of refractory oxides, whereas the corium pool contains no solid. The interfacial temperature between the crust and the pool is the liquidus temperature calculated with the composition of the pool. Thermal-hydraulics (mass and energy balances) is then coupled with physico-chemistry (liquidus temperature, crust composition, chemical reactions). The TOLBIAC-ICB code is developed in the frame of an agreement with EDF, in order to simulate MCCI with the phase segregation model. It is coupled with the GEMINI code for the determination of the physico-chemistry variables. The main purpose of this paper is to present the modelling used in TOLBIAC-ICB and some validation calculations using data from prototypic experiments available in the literature. Part of the attention focuses on material effects highlighted in some tests and reproduced in the numerical simulations. (authors)
He, ZuYong; Mei, Gui; Zhao, ChunPeng; Chen, YaoSheng
Engineered sequence-specific zinc finger nucleases (ZFNs) make highly efficient modification of eukaryotic genomes possible. However, most current strategies for developing zinc finger nucleases with customized sequence specificities require the construction of numerous tandem arrays of zinc finger proteins (ZFPs), and subsequent large-scale in vitro validation of their DNA binding affinities and specificities via bacterial selection. The labor and expertise required in this complex process limit the broad adoption of ZFN technology. An effective computation-assisted design strategy would lower the complexity of producing a pair of functional ZFNs. Here we used the FoldX force field to build 3D models of 420 ZFP-DNA complexes based on zinc finger arrays developed by the Zinc Finger Consortium using OPEN (oligomerized pool engineering). Using nonlinear and linear regression analysis, we found that the calculated protein-DNA binding energy in a modeled ZFP-DNA complex correlates strongly with the failure rate of the zinc finger array to show significant ZFN activity in human cells. In our models, less than 5% of the three-finger arrays with calculated protein-DNA binding energies lower than -13.132 kcal mol(-1) fail to form active ZFNs in human cells. By contrast, for arrays with calculated protein-DNA binding energies higher than -5 kcal mol(-1), as many as 40% lacked ZFN activity in human cells. Therefore, we suggest that the FoldX force field can be useful in reducing the failure rate and increasing efficiency in the design of ZFNs. PMID:21455692
Fwu, Peter Tramyeon
The medical image is very complex by its nature. Modeling built upon medical images is challenging due to the lack of analytical solutions. The finite element method (FEM) is a numerical technique which can be used to solve partial differential equations; it transforms a continuous domain into solvable discrete sub-domains. In three-dimensional space, FEM is capable of dealing with complicated structures and heterogeneous interiors, which makes it an ideal tool for medical-image-based modeling problems. In this study, I address three modeling problems: (1) photon transport inside the human breast, by implementing the radiative transfer equation to simulate diffuse optical spectroscopy imaging (DOSI) in order to measure percent density (PD), which has been proven to be a cancer risk factor in mammography. Our goal is to use MRI as the ground truth to optimize the DOSI scanning protocol to obtain a consistent measurement of PD. Our results show that the DOSI measurement is position- and depth-dependent and that a proper scanning scheme and body configuration are needed; (2) heat flow in the prostate, by implementing the Pennes bioheat equation to evaluate the cooling performance of regional hypothermia during robot-assisted radical prostatectomy for the individual patient in order to achieve the optimal cooling setting. Four factors are taken into account during the simulation: blood abundance, artery perfusion, cooling balloon temperature, and anatomical distance. The results show that blood abundance, prostate size, and anatomical distance are significant factors for the equilibrium temperature of the neurovascular bundle; (3) shape analysis of the hippocampus, using radial distance mapping and two registration methods to find correlations between sub-regional change and age and cognitive performance, which might not be revealed by volumetric analysis. The results give fundamental knowledge of the normal distribution in young
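The Pennes bioheat equation in problem (2) balances heat conduction against blood perfusion, which pulls tissue back toward arterial temperature. Below is a rough 1-D explicit finite-difference sketch with generic tissue parameters (not the patient-specific 3-D FEM model of the thesis); the left boundary plays the role of the cooling balloon:

```python
def pennes_1d(n=50, steps=2000, dx=1e-3, dt=0.01,
              k=0.5, rho_c=3.6e6, w_cb=2000.0,
              t_core=37.0, t_balloon=15.0):
    """Explicit finite-difference sketch of the Pennes bioheat equation,
    rho*c*dT/dt = k*d2T/dx2 + w_cb*(T_a - T), on a 1-D tissue strip.
    Left boundary: cooling balloon; right boundary: body core."""
    temps = [t_core] * n
    for _ in range(steps):
        temps[0], temps[-1] = t_balloon, t_core
        nxt = temps[:]
        for i in range(1, n - 1):
            conduction = (k / rho_c) * (temps[i-1] - 2*temps[i] + temps[i+1]) / dx**2
            perfusion = (w_cb / rho_c) * (t_core - temps[i])
            nxt[i] = temps[i] + dt * (conduction + perfusion)
        temps = nxt
    return temps

profile = pennes_1d()   # temperature profile after 20 s of cooling
```

The sketch reproduces the qualitative behaviour the study quantifies: cooling penetrates only a short distance, so the anatomical distance to the structure of interest matters.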
Samolyuk, G D; Béland, L K; Stocks, G M; Stoller, R E
Energy transfer between lattice atoms and electrons is an important channel of energy dissipation during displacement cascade evolution in irradiated materials. On the assumption of small atomic displacements, the intensity of this transfer is controlled by the strength of electron-phonon (el-ph) coupling. The el-ph coupling in concentrated Ni-based alloys was calculated using electronic structure results obtained within the coherent potential approximation. It was found that Ni0.5Fe0.5, Ni0.5Co0.5 and Ni0.5Pd0.5 are ordered ferromagnetically, whereas Ni0.5Cr0.5 is nonmagnetic. Since the magnetism in these alloys has a Stoner-type origin, the magnetic ordering is accompanied by a decrease of electronic density of states at the Fermi level, which in turn reduces the el-ph coupling. Thus, the el-ph coupling values for all alloys are approximately 50% smaller in the magnetic state than for the same alloy in a nonmagnetic state. As the temperature increases, the calculated coupling initially increases. After passing the Curie temperature, the coupling decreases. The rate of decrease is controlled by the shape of the density of states above the Fermi level. Introducing a two-temperature model based on these parameters in 10 keV molecular dynamics cascade simulation increases defect production by 10-20% in the alloys under consideration. PMID:27033732
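The two-temperature model referred to above couples electron and lattice temperatures through the el-ph coupling constant g. A minimal sketch with illustrative (not calculated) parameters shows why a roughly 50% weaker coupling, as in the magnetic state, slows electron cooling during a cascade:

```python
def equilibrate(g, steps=500, dt=1e-15,
                te0=3000.0, tl0=300.0, ce=3.5e4, cl=3.5e6):
    """Two-temperature model: hot cascade electrons (Te) lose energy to the
    lattice (Tl) at a rate set by the el-ph coupling g (W m^-3 K^-1).
    All values here are illustrative placeholders."""
    te, tl = te0, tl0
    for _ in range(steps):
        q = g * (te - tl)        # energy flow, electrons -> lattice
        te -= dt * q / ce
        tl += dt * q / cl
    return te, tl

# ~50% weaker coupling in the magnetically ordered state
te_magnetic, _ = equilibrate(g=0.5e17)
te_nonmagnetic, _ = equilibrate(g=1.0e17)
```

After the same elapsed time the weakly coupled (magnetic) electron subsystem remains hotter, which is the mechanism behind the altered defect production reported in the paper.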
The irradiation of materials and products 'off carrier' has historically been performed using a 'drop-and-read' methodology whereby the radioisotope source is raised and lowered repeatedly until the desired absorbed dose is achieved. This approach is time consuming from both a manpower and process perspective. Static irradiation-based processes can also be costly because of the need for repeated experimental verification of target dose delivery. In our paper we address the methods used for predicting Ethicon Endo Surgery's (EES's) off-carrier absorbed dose distributions. The scenarios described herein are complex due to the fact that the on-carrier process stream exhibits a wide range of densities and dose rates. The levels of observed complexity are attributed to the 'just-in-time' production strategy and its related requirements as they apply to the programming of EES's cobalt-60 irradiators. Validation of off-carrier processing methodologies requires sophisticated parametric-based systems utilizing mathematical algorithms that predict off-carrier absorbed dose rate relative to the on-carrier process stream components. Irradiation process simulation is achieved using a point kernel computer modeling approach, coupled with database generation and maintenance. Dose prediction capabilities are validated via routine and transfer standard dosimetry
Nursing practice is comprised of knowledge, theory, and research. Because of its impact on the profession, the appraisal of research evidence is critically important. Future nursing professionals must be introduced to the purpose and utility of nursing research, as early exposure provides an opportunity to embed evidence-based practice (EBP) into clinical experiences. The AACN requires baccalaureate education to include an understanding of the research process to integrate reliable evidence to inform practice and enhance clinical judgments. Although the importance of these knowledge competencies is evident to healthcare administrators and nursing leaders within the field, undergraduate students at the institution under study sometimes have difficulty understanding the relevance of nursing research to the baccalaureate-prepared nurse, and struggle to grasp advanced concepts of qualitative and quantitative research design and methodologies. As undergraduate nursing students generally have not demonstrated an understanding of the relationship between theoretical concepts found within the undergraduate nursing curriculum and the practical application of these concepts in the clinical setting, the research team decided to adopt an effective pedagogical active learning strategy, team-based learning (TBL). Team-based learning shifts the traditional course design to focus on higher thinking skills to integrate desired knowledge. The purpose of this paper is to discuss the impact of course design with the integration of TBL in an undergraduate nursing research course on increasing higher order thinking. References: American Association of Colleges of Nursing, The Essentials of Baccalaureate Education for Professional Nursing Practice, Washington, DC: American Association of Colleges of Nursing, 2008; B. Bloom, Taxonomy of Educational Objectives, Handbook I: Cognitive Domain, New York: McKay, 1956.
Ohlberger, Mario; Smetana, Kathrin
In this article we introduce a procedure that recovers the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution that are skewed with respect to the coordinate axes. The two key ideas are to locate the interface, either by solving a lower-dimensional partial differential equation or by using data functions, and then to remove the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis-hierarchical model reduction approach that the proposed procedure locates the interface well and yields significantly improved convergence behavior, even when we only consider an approximation of the interface.
Forsberg, Ronald; Christiansen, Freddy Bugge
Parasites sometimes expand their host range by acquiring a new host species. Following a host change event, the selective regime acting on a given parasite gene may change due to host-specific adaptive alterations of protein functionality or host-specific immune-mediated selection. We present a codon-based model that attempts to include these effects by allowing the position-specific substitution process to change in conjunction with a host change event. Following maximum-likelihood parameter estimation, we employ an empirical Bayesian procedure to identify candidate sites potentially involved in host-specific adaptation. We discuss the applicability of the model to the more general problem of ascertaining whether the selective regime differs between two groups of related organisms. The utility of the model is illustrated on a dataset of nucleoprotein sequences from the influenza A virus.
Zverlov, Sergey; Khalil, Maged; Chaudhary, Mayank
Increasingly complex functionality in automotive applications demands more and more computing power. As room for computing units in modern vehicles dwindles, centralized architectures, with larger, more powerful processing units, are the trend. With this trend, applications no longer run on dedicated hardware, but share the same computing resources with others on the centralized platform. Ascertaining efficient deployment and scheduling for co-located applications is c...
Jackisch, Conrad; van Schaik, Loes; Graeff, Thomas; Zehe, Erwin
Preferential flow through macropores often determines hydrological characteristics, especially runoff generation and fast solute transport. Macropore settings may yet be very different in nature and dynamics, depending on their origin. While biogenic structures follow activity cycles (e.g. earthworms) and population conditions (e.g. roots), pedogenic and geogenic structures may depend on water stress (e.g. cracks) or large events (e.g. flushed voids between skeleton and soil pipes) or simply persist (e.g. bedrock interface). On the one hand, such dynamic site characteristics can be observed in seasonal changes in a site's reaction to precipitation. On the other hand, sprinkling experiments accompanied by tracers or time-lapse 3D ground-penetrating radar are suitable tools to determine infiltration patterns and macropore configuration. However, model representation of the macropore-matrix system is still problematic, because models either rely on effective parameters (assuming a well-mixed state) or on explicit advection, strongly simplifying or neglecting interaction with the diffusive flow domain. Motivated by the dynamic nature of macropores, we present a novel model approach for interacting diffusive and advective transport of water, solutes and energy in structured soils. It relies solely on scale- and process-aware observables. A representative set of macropores (data from sprinkling experiments) determines the process model scale through 1D advective domains. These are connected to a 2D matrix domain which is defined by pedo-physical retention properties. Water is represented as particles. Diffusive flow is governed by a 2D random walk of these particles, while advection may take place in the macropore domain. Macropore-matrix interaction is computed as dissipation of the advective momentum of a particle by the drag it experiences from the matrix domain. Through a representation of matrix and macropores as connected diffusive and advective domains for water
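The particle-based concept above (diffusive random walk in the matrix, fast advection in macropores) can be sketched very roughly as follows; the domain-switching probability and the coefficients are illustrative placeholders, not the model's observables:

```python
import random

random.seed(42)

def step(x, z, p_macro=0.2, d=0.001, v=0.05, dt=1.0):
    """One step for a water 'particle': advective drop if it currently sits
    in a macropore, otherwise a 2-D diffusive random walk in the matrix
    (downward half-normal so infiltration has a net direction)."""
    sigma = (2.0 * d * dt) ** 0.5
    if random.random() < p_macro:        # advective macropore domain
        z += v * dt
    else:                                # diffusive matrix domain
        x += random.gauss(0.0, sigma)
        z += abs(random.gauss(0.0, sigma))
    return x, z

depths = []
for _ in range(500):
    x, z = 0.0, 0.0
    for _ in range(100):
        x, z = step(x, z)
    depths.append(z)
mean_depth = sum(depths) / len(depths)   # net infiltration depth, arbitrary units
```

Even this crude sketch shows the key behaviour: the advective macropore domain dominates the depth distribution's tail, while the random walk controls lateral spreading.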
Mergili, M.; Marchesini, I.; Fellin, W.; Rossi, M.; Raia, S.; Guzzetti, F.
Landslide risk depends on landslide hazard, i.e. the probability of occurrence of a slope failure of a given magnitude within a specified period and in a given area. The occurrence probability of slope failures in an area characterized by a set of geo-environmental parameters gives the landslide susceptibility. Statistical and deterministic methods are used to assess landslide susceptibility. Deterministic models based on limit equilibrium techniques are applied for the analysis of particular types of landslides (e.g., shallow soil slips, debris flows, rock falls), or to investigate the effects of specific triggers, i.e., an intense rainfall event or an earthquake. In particular, infinite slope stability models are used to calculate the spatial probability of shallow slope failures. In these models, the factor of safety is computed on a pixel basis, assuming a slope-parallel, infinite slip surface. Since shallow slope failures coexist locally with deep-seated landslides, infinite slope stability models fail to describe the complexity of the landslide phenomena. Limit equilibrium models with curved sliding surfaces are geometrically more complex, and their implementation in raster-based GIS is a challenging task. Only a few attempts have been made to develop GIS-based three-dimensional applications of such methods. We present a preliminary implementation of a GIS-based, three-dimensional slope stability model capable of dealing with deep-seated and shallow rotational slope failures. The model is implemented as a raster module (r.rotstab) in the Open Source GIS package GRASS GIS, and makes use of the three-dimensional sliding surface model proposed by Hovland (1977). Given a DEM and a set of thematic layers of geotechnical and hydraulic parameters, the model tests a large number of randomly determined potential ellipsoidal slip surfaces. In addition to ellipsoidal slip surfaces, truncated ellipsoids are tested, which can occur in the presence of weak layers or hard
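For reference, the pixel-wise infinite-slope factor of safety mentioned above is the ratio of resisting to driving stresses on a slope-parallel surface. A sketch with a simple saturation-ratio pore-pressure term (parameter names are generic illustrations, not those used by r.rotstab):

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m=0.0, gamma_w=9.81):
    """Infinite-slope factor of safety on a slope-parallel slip surface.
    c: cohesion (kPa); phi: friction angle; gamma: soil unit weight (kN/m3);
    z: slip depth (m); beta: slope angle; m: saturated fraction of z."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    effective_normal = (gamma - m * gamma_w) * z * math.cos(beta) ** 2
    resisting = c + effective_normal * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

fs_dry = factor_of_safety(c=5.0, phi_deg=30.0, gamma=19.0, z=2.0, beta_deg=35.0, m=0.0)
fs_wet = factor_of_safety(c=5.0, phi_deg=30.0, gamma=19.0, z=2.0, beta_deg=35.0, m=1.0)
```

Saturation lowers the effective normal stress and hence the factor of safety, which is why rainfall is the trigger most often analysed with this model.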
Zeni, Lorenzo; Margaris, Ioannis; Hansen, Anca Daniela; Sørensen, Poul Ejnar; Kjær, P.C.
This paper focuses on generic Type 4 wind turbine generator models, their applicability in modern HVDC connections and their capability to provide advanced ancillary services. A point-to-point HVDC offshore connection is considered. Issues concerning coordinated HVDC and wind farm control, as well as the need for a communication link, are discussed. Two possible control configurations are presented and compared. The first is based on a communication link transmitting the onshore frequency directly to the wind power plant, while the second makes use of a coordinated control scheme...
This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed and sel...
Pernechele, Claudio; Villa, Federica A.
Panoramic omnidirectional lenses typically obscure the frontal view, producing the classic "donut-shaped" image in the focal plane. We realized a panoramic lens in which the frontal field can be imaged in the focal plane together with the panoramic field, producing a FoV of 360° in azimuth and 270° in elevation; it thus combines the capabilities of a fish-eye lens with those of a panoramic lens, and we call it a hyper-hemispheric lens. We built and tested an all-spherical hyper-hemispheric lens. The all-spherical configuration suffers from the typical issues of all ultra-wide-angle lenses: large distortion at high view angles. The fundamental origin of the optical problems is that chief-ray angles on the object side are not preserved in passing through the optics preceding the aperture stop (fore-optics). This effect produces image distortion in the focal plane, with the focal length changing with elevation angle. Moreover, the entrance pupil shifts at large angles, where the paraxial approximation is no longer valid, and tracing the rays appropriately requires some effort from the optical designer. Note that this distortion is not a point-source aberration: it is present even in well-corrected lenses. Image distortion may be partially corrected using an aspheric surface. We describe here how we corrected it for our original hyper-hemispheric lens by designing an aspheric surface within the optical train, optimized for Single Photon Avalanche Diode (SPAD) array-based imaging applications.
Bonanne, Kevin H.
Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.
Bugaets, Andrey; Gonchukov, Leonid
Bringing deterministic distributed hydrological models into operational water management requires intensive collection and input of spatially distributed climatic information in a timely manner, which is both time consuming and laborious. The lead time of the data pre-processing stage could be substantially reduced by coupling hydrological and numerical weather prediction models. This is especially important for regions such as the South of the Russian Far East, where the geographical position combined with a monsoon climate affected by typhoons and extremely heavy rains causes rapid rises in mountain river water levels, leading to flash flooding and enormous damage. The objective of this study is the development of an end-to-end workflow that executes, in a loosely coupled mode, an integrated modeling system comprised of the Weather Research and Forecasting (WRF) atmospheric model and the Soil and Water Assessment Tool (SWAT 2012) hydrological model, using OpenMI 2.0 and web-service technologies. Migrating SWAT to an OpenMI-compliant component involves reorganizing the model into separate initialization, time-step and finalization functions that can be accessed from outside. To preserve SWAT's normal behavior, the source code was separated from the OpenMI-specific implementation into a static library. The modified code was assembled into a dynamic library and wrapped in a C# class implementing the OpenMI ILinkableComponent interface. The WRF OpenMI-compliant component was developed based on the idea of wrapping web-service clients into a linkable component and seamlessly accessing output netCDF files without actually connecting the models. The weather state variables (precipitation, wind, solar radiation, air temperature and relative humidity) are processed by an automatic input selection algorithm to single out the most relevant values used by the SWAT model to yield climatic data at the subbasin scale. Spatial interpolation between the WRF regular grid and SWAT subbasin centroids (which are
Since the first Web page went online in 1990, the rapid development of the World Wide Web has been continuously influencing the way we manage and acquire personal or corporate information. Nowadays, the WWW is much more than a static information medium, which it was at the beginning. The expressive power of modern programming languages is at the full disposal of Web application developers, who continue to surprise Web users with innovative applications having feature sets and complexity, whic...
J. O. Kaplan
This thesis describes the development and selected applications of a global vegetation model, BIOME4. The model is applied to problems in high-latitude vegetation distribution and climate, trace gas production, and isotope biogeochemistry. It demonstrates how a modeling approach, based on principles of plant physiology and ecology, can be applied to interdisciplinary problems that cannot be adequately addressed by direct observations or experiments. The work is relevant to understanding the p...
In 2000 we proposed a sociophysics model of opinion formation, which was based on the trade union maxim ''United we Stand, Divided we Fall'' (USDF) and later, due to Dietrich Stauffer, became known as the Sznajd model (SM). The main difference between SM and voter- or Ising-type models is that information flows outward. In this paper we review the modifications and applications of SM that have been proposed in the literature. (author)
JIA Ming; YANG Gongliu
An optical fiber coil winding model is used to guide proper, high-precision coil winding for fiber optic gyroscope (FOG) applications. Based on the large-deformation theory of elasticity, a stress analysis of the optical fiber free end is made and the balance equation of an infinitesimal fiber element is deduced; the deformation equation is then derived by substituting the terminal conditions. On the condition that only an axial tensile force exists, an approximate curve equation is built in the small-angle deformation regime. Comparing the tangent-point longitudinal coordinate between theory and approximation gives the constant of integration, and the expression is recast with the tangent point as the origin of coordinates. Analyzing the winding parameters of an example, it is clear that the horizontal distance from the highest point of the wheel to the fiber tangent point is of millimeter order, differs significantly as fiber tension varies, and remains invariant when the wheel radius changes. The height of tension and the accurate position of the tangent point are defined for proper fiber guidance. For application to the fiber optic gyroscope, the spiral-disc winding method and the non-ideal deformation of the straddle section are analyzed, and the spiral-disc quadrupole pattern winding method is introduced and realized by a winding system. The winding results confirm that the winding model is applicable.
Etemadfard, Hossein; Mashhadi Hossainali, Masoud
Because of their significant energy resources, the polar regions have emerged as strategic parts of the world, and various research projects have been funded to study these areas in further detail. This research intends to improve the accuracy of spherical harmonic (SH)-based Global Ionospheric Models (GIMs) by reconstructing a new map of the ionosphere in the Arctic region. For this purpose, spatiospectral concentration is applied to optimize the base functions, carried out using the Slepian theory developed by Simons. Here the new base functions and the corresponding coefficients are derived from the SH models for the polar regions. Then, VTEC (vertical total electron content) is reconstructed using Slepian functions and the new coefficients. Reconstructed VTECs and the VTECs derived from SH models are compared to estimates of this parameter derived directly from dual-frequency GPS measurements. Three International Global Navigation Satellite Systems Service stations located in the northern polar region have been used for this purpose. The adopted GPS data span days of year 69 to 83 (15 successive days) of 2013. According to the obtained results, on average, application of the Slepian theory can improve the accuracy of the GIM by 1 to 2 total electron content units (TECU) (1 TECU = 10^16 el m^-2) in the Arctic region.
Zeng, Fanchun; Zhang, Bin; Zhang, Lu; Ji, Jinfu; Jin, Wenjing
An improved T-S model was introduced to identify a model of the SCR system. The model structure was selected by physical analysis and mathematical tests. Three different clustering algorithms were introduced to obtain space partitions, which were then amended by mathematical methods. Finally, the model parameters were identified by the least squares method. Training data were sampled from the SCR system of a 1000 MW coal-fired unit, and its T-S model was identified with the three clustering methods. The identification results proved effective; the merits and demerits of the methods are analyzed at the end.
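A Takagi-Sugeno model of the kind described combines local linear consequents with fuzzy memberships derived from clustering, the consequent parameters being fitted by least squares. A toy one-dimensional sketch with fixed cluster centers (the paper's clustering algorithms and SCR data are not reproduced here):

```python
import math

def fit_ts_model(xs, ys, centers, sigma=1.0):
    """Toy Takagi-Sugeno identification: Gaussian memberships around fixed
    cluster centers, with a local linear consequent per rule fitted by
    weighted least squares."""
    rules = []
    for c in centers:
        w = [math.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for x in xs]
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * swxx - swx * swx
        a = (sw * swxy - swx * swy) / det      # local slope
        b = (swxx * swy - swx * swxy) / det    # local intercept
        rules.append((c, a, b))
    return rules

def predict(rules, x, sigma=1.0):
    """Membership-weighted blend of the local linear consequents."""
    w = [math.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c, _, _ in rules]
    return sum(wi * (a * x + b) for wi, (_, a, b) in zip(w, rules)) / sum(w)

xs = [i / 10 for i in range(60)]
ys = [x * x for x in xs]                # nonlinear 'plant' to identify
rules = fit_ts_model(xs, ys, centers=[1.0, 3.0, 5.0])
```

In the paper the cluster centers come from the three clustering algorithms rather than being fixed by hand; the least-squares consequent fit is the common final step.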
CHEN Yue-hua; CAO Guang-yi; ZHU Xin-jian
This paper describes a nonlinear model predictive controller for regulating a molten carbonate fuel cell (MCFC). A detailed mechanistic model of the output voltage of an MCFC is presented first. However, this model is too complicated to be used in a control system. Consequently, an offline radial basis function (RBF) network is introduced to build a nonlinear predictive model. The optimal control sequences are then obtained by applying the golden mean (golden section) method. The models and controller have been realized in the MATLAB environment. Simulation results indicate the proposed algorithm exhibits a satisfying control effect even when the current density varies largely.
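The scheme described above, an RBF-network model searched with the golden mean (golden section) method, can be sketched on a toy one-dimensional surrogate; the centers, weights and objective below are illustrative placeholders, not the MCFC voltage model:

```python
import math

def rbf_predict(x, centers, weights, sigma=0.5):
    """Radial basis function network output: weighted sum of Gaussian kernels."""
    return sum(w * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
               for c, w in zip(centers, weights))

def golden_section_max(f, lo, hi, tol=1e-5):
    """Golden-section search for the maximiser of a unimodal function on [lo, hi]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c = b - inv_phi * (b - a)
        d = a + inv_phi * (b - a)
        if f(c) > f(d):
            b = d
        else:
            a = c
    return (a + b) / 2.0

# toy surrogate: modelled 'voltage' vs a single control input, with one peak
centers = [0.2, 0.5, 0.8]
weights = [0.3, 1.0, 0.2]
x_opt = golden_section_max(lambda x: rbf_predict(x, centers, weights), 0.0, 1.0)
```

In the controller the search would run over candidate control sequences at each step, with the RBF network standing in for the expensive mechanistic model.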
De Mulder, Wim; Grow, André; Molenberghs, Geert; Verbeke, Geert
We describe the application of statistical emulation to the outcomes of an agent-based model. The agent-based model simulates the mechanisms that might have linked the reversal of gender inequality in higher education with observed changes in educational assortative mating in Belgium. Using the statistical emulator as a computationally fast approximation to the expensive agent-based model, it is feasible to use a genetic algorithm in finding the parameter values for which the correspondin...
Vilémová, Monika; Matějíček, Jiří; Mušálek, Radek; Nohava, J.
Roč. 21, 3-4 (2012), s. 372-382. ISSN 1059-9630 R&D Projects: GA MŠk ME 901 Institutional research plan: CEZ:AV0Z20430508 Keywords : analytical model * elastic modulus * finite element modeling * image analysis * modeling of properties * thermal conductivity * water stabilized plasma Subject RIV: JK - Corrosion ; Surface Treatment of Materials Impact factor: 1.481, year: 2012 http://www.springerlink.com/content/3m24812367315142/fulltext.pdf
Highlights: • Methodological framework for obtaining Robust Unit Commitment (UC) policies. • Wind-power forecast using a revisited bootstrap predictive inference approach. • Novel scenario-based model for wind-power uncertainty. • Efficient modeling framework for obtaining nearly optimal UC policies in reasonable time. • Effective incorporation of wind-power uncertainty in the UC modeling. - Abstract: The complex processes involved in determining the availability of power from renewable energy sources, such as wind power, impose great challenges on the forecasting processes carried out by transmission system operators (TSOs). Nowadays, many of these TSOs use operation planning tools that take into account the uncertainty of wind power. However, most of these methods typically require strict assumptions about the probabilistic behavior of the forecast error, and usually ignore the dynamic nature of the forecasting process. In this paper a methodological framework to obtain Robust Unit Commitment (UC) policies is presented; the methodology considers a novel scenario-based uncertainty model for wind-power applications. The proposed method is composed of three main phases. The first two phases generate a sound wind-power forecast using a bootstrap predictive inference approach. The third phase corresponds to modeling and solving a one-day-ahead Robust UC considering the output of the first phase. The performance of the proposed approach is evaluated using as a case study a new wind farm to be incorporated into the Northern Interconnected System (NIS) of Chile. A projection of wind-based power installation, as well as different characteristics of the uncertain data, are considered in this study.
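The bootstrap phase, resampling historical forecast errors to build wind-power scenarios, can be sketched as follows. The forecast profile, residual pool, and scenario count are illustrative assumptions, not the NIS case-study data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 24-hour point forecast (MW) and pool of past forecast errors.
forecast = 50.0 + 20.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))
residuals = rng.normal(0.0, 5.0, 300)  # stand-in for historical errors

def bootstrap_scenarios(forecast, residuals, n_scen, rng):
    """Resample historical errors with replacement to build wind-power
    scenarios around the point forecast (the bootstrap idea)."""
    draws = rng.choice(residuals, size=(n_scen, forecast.size), replace=True)
    return np.clip(forecast[None, :] + draws, 0.0, None)  # power >= 0

scenarios = bootstrap_scenarios(forecast, residuals, n_scen=100, rng=rng)
```

Each row is one plausible 24-hour wind-power trajectory; a scenario set like this is what the Robust UC phase would then take as input.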
The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptor (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways, and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of the hH4R structure, virtual screening campaigns are normally run on homology-based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years, and an appropriate combination of this ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and the Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model to an external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave an enrichment factor of 16.4. A virtual high-throughput screening on the ZINC database was carried out, picking ∼4000 chemicals highly indexed as hH4R antagonist candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discriminating between actives and non-actives was checked, and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly scored active drug candidates. Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and
A consequence of environmental model complexity is that the task of understanding how environmental models work and identifying their sensitivities/uncertainties, etc. becomes progressively more difficult. Comprehensive numerical and visual evaluation tools have been developed such as the Monte Carl...
Berhane, Kiros; Molitor, Nuoo-Ting
Flexible multilevel models are proposed to allow for cluster-specific smooth estimation of growth curves in a mixed-effects modeling format that includes subject-specific random effects on the growth parameters. Attention is then focused on models that examine between-cluster comparisons of the effects of an ecologic covariate of interest (e.g. air pollution) on nonlinear functionals of growth curves (e.g. maximum rate of growth). A Gibbs sampling approach is used to get posterior mean estimates of nonlinear functionals along with their uncertainty estimates. A second-stage ecologic random-effects model is used to examine the association between a covariate of interest (e.g. air pollution) and the nonlinear functionals. A unified estimation procedure is presented along with its computational and theoretical details. The models are motivated by, and illustrated with, lung function and air pollution data from the Southern California Children's Health Study. PMID:18349036
A new twin screw compressor has been developed by SRM (Svenska Rotor Maskiner) for use in a new high temperature heat pump using water as refrigerant. This article presents a mathematical model of the thermodynamic process of compression in twin screw compressors. Using a special discretization method, a transient twin screw compressor model has been developed using Modelica in order to study the dry compression cycle of this machine at high temperature levels. The pressure and enthalpy evolution in the control volumes of the model are calculated as a function of the rotational angle of the male rotor using energy and continuity equations. In addition, associated processes encountered in real machines such as variable fluid leakages, water injection and heat losses are modeled and implemented in the main compressor model. A comparison is performed using the model developed, demonstrating the behavior of the compressor and the evolution of its different parameters in different configurations with and without water injection. This comparison shows the need for water injection to avoid compressor failure and improve its efficiency. -- Highlights: • Difficulties related to the compressor limit the development of a high temperature heat pump using water as refrigerant. • A new water vapor twin screw compressor has been developed to overcome compression problems. • A dynamic model of this compressor has been developed and simulated using Modelica. • The behavior of the compressor has been identified all along the compression cycle and efficiencies have been calculated.
Biofilm is a dominant form of existence for bacteria in most natural and synthetic environments. Depending on the application area, they can be useful or harmful. They have a helpful influence in bioremediation, microbial enhanced oil recovery, and metal extraction. On the other hand, biofilms are damaging for water pipes, heat exchangers, submarines and body organs. Formation of biofilm within a porous matrix reduces the pore size and total empty space of the system, altering the porosity an...
Babankumar S. Bansod; OP Pandey
Starting from descriptive data on crop yield and various other properties, the aim of this study is to reveal trends in soil behaviour, such as crop yield. The study has been carried out by developing a web application that uses a well-known technique, cluster analysis. The cluster analysis revealed linkages between soil classes for the same field as well as between different fields, which can be partly assigned to crop rotation and the determination of variable soil input rates. A hybrid clu...
Clancey, William J.
This viewgraph presentation provides an overview of past and possible future applications of artificial intelligence (AI) in astronaut instruction and training. AI systems have been used in training simulations for the Hubble Space Telescope repair and the International Space Station, and in operations simulation for the Mars Exploration Rovers. In the future, robots may work as partners with astronauts on missions such as planetary exploration and extravehicular activities.
Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul
Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications, both steady state and dynamic, are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and the development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, and heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil...
QIAN Li; CHEN Wen-yuan; XU Guo-ping
A new scheme is proposed to model the 3D angular motion of a revolving rigid object with miniature, low-cost micro-electro-mechanical systems (MEMS) accelerometers (instead of a gyroscope), for use in a 3D mouse system. To sense 3D angular motion, the static property of MEMS accelerometers, their sensitivity to gravitational acceleration, is exploited. With the three outputs of the configured accelerometers, the proposed model is implemented to obtain the rotary motion of the rigid object. To validate the effectiveness of the proposed model, an input device was developed following the configuration of the scheme. Experimental results show that a simulated 3D cube can accurately track the rotation of the input device, indicating the feasibility and effectiveness of the proposed model in the 3D mouse system.
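The paper's exact accelerometer configuration is not given here, but the underlying principle, recovering orientation from the sensed gravity vector, is standard. A minimal static-tilt sketch for a single triaxial accelerometer (axis convention and gravity constant are assumptions):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Roll and pitch (radians) of a static body from the sensed gravity
    vector. Yaw about gravity is unobservable from accelerometers alone,
    which is why multi-accelerometer configurations are used for full 3D."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Device lying flat: gravity along +z only, so zero roll and pitch.
roll, pitch = tilt_from_accel(0.0, 0.0, 9.81)
```

This static exploitation of gravity sensitivity is the same property the proposed scheme builds on with multiple sensor outputs.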
In this research, Internal Combustion (IC) engine modeling is addressed, and a multi-input multi-output artificial-intelligence-based chattering-free sliding mode methodology is developed with guaranteed stability to simultaneously control fuel ratios to desired levels under various air-flow disturbances by regulating the mass flow rates of the engine PFI and DI injection systems. Modeling an entire IC engine is a very important and complicated process because engines are nonlinear, multi-input multi-output and time variant. One purpose of accurate modeling is to save development costs of real engines and minimize the risks of damaging an engine when validating controller designs. Nevertheless, a small model can be developed for specific controller design purposes and then validated on a larger, more complicated model. Analytical dynamic nonlinear modeling of the internal combustion engine is carried out using the elegant Euler-Lagrange method, compromising between accuracy and complexity. A baseline estimator with varying parameter gain is designed with guaranteed stability to allow implementation of the proposed state feedback sliding mode methodology in a MATLAB simulation environment, where the sliding mode strategy is implemented in a model engine control module ("software"). To estimate the dynamic model of the IC engine, a fuzzy inference engine is applied to the baseline sliding mode methodology. The performance of the fuzzy inference baseline sliding methodology was compared with a well-tuned baseline multi-loop PID controller through MATLAB simulations and showed improvements; the simulations were conducted to validate the feasibility of utilizing the developed controller and state estimator for automotive engines. The proposed tracking method is designed to optimally track the desired FR by minimizing the error between the trapped in-cylinder mass and the product of the desired FR and the fuel mass over a given time interval.
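A chattering-free sliding mode law typically replaces the discontinuous sign() term with a boundary-layer saturation. A minimal scalar sketch (the surface slope, gain, and boundary-layer width are arbitrary illustrative values, not the paper's tuned parameters):

```python
def smc_control(e, edot, lam=2.0, k=5.0, phi=0.1):
    """u = -k * sat(s / phi) on the sliding surface s = edot + lam * e.
    The saturation (boundary layer of width phi) replaces sign() to
    suppress the high-frequency chattering of classical sliding mode."""
    s = edot + lam * e
    sat = max(-1.0, min(1.0, s / phi))
    return -k * sat

u = smc_control(e=0.2, edot=-0.1)  # s = 0.3, saturated: u = -5.0
```

Inside the boundary layer (|s| < phi) the law degenerates to a high-gain linear feedback, which is the usual price paid for smoothness.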
Recently, Artificial Neural Networks (ANNs), which are mathematical modeling tools inspired by the properties of biological neural systems, have typically been used in studies of hydrological time series modeling. These modeling studies generally include standard feed-forward backpropagation (FFBP) algorithms such as gradient descent, gradient descent with a momentum rate, and conjugate gradient. As the standard FFBP algorithms have some disadvantages relating to the time requirement and...
Kidane, A.; Lashgari, A.; Li, B.; McKerns, M.; Ortiz, M.; Owhadi, H.; Ravichandran, G.; Stalzer, M.; Sullivan, T. J.
This work is concerned with establishing the feasibility of a data-on-demand (DoD) uncertainty quantification (UQ) protocol based on concentration-of-measure inequalities. Specific aims are to establish the feasibility of the protocol and its basic properties, including the tightness of the predictions afforded by the protocol. The assessment is based on an application to terminal ballistics and a specific system configuration consisting of 6061-T6 aluminum plates struck by spherical S-2 tool steel projectiles at ballistic impact speeds. The system's inputs are the plate thickness and impact velocity, and the perforation area is chosen as the sole performance measure of the system. The objective of the UQ analysis is to certify the lethality of the projectile, i.e., that the projectile perforates the plate with high probability over a prespecified range of impact velocities and plate thicknesses. The net outcome of the UQ analysis is an M/U ratio, or confidence factor, of 2.93, indicative of a small probability of no perforation of the plate over its entire operating range. The high confidence (>99.9%) in the successful operation of the system afforded by the analysis, and the small number of tests (40) required for the determination of the modeling-error diameter, establish the feasibility of the DoD UQ protocol as a rigorous yet practical approach for model-based certification of complex systems.
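Concentration-of-measure certification of this kind is usually based on a McDiarmid-type bound, under which the probability of failure is bounded by exp(-2 (M/U)^2) for confidence factor CF = M/U; plugging in the reported CF of 2.93 is consistent with the >99.9% confidence claim. A sketch, assuming this standard form of the bound:

```python
import math

def pof_upper_bound(margin, diameter):
    """McDiarmid-style bound: PoF <= exp(-2 * (M/U)**2),
    where M is the performance margin and U the uncertainty
    (system + modeling-error) diameter; CF = M/U."""
    cf = margin / diameter
    return cf, math.exp(-2.0 * cf ** 2)

# Reported confidence factor M/U = 2.93 (units cancel in the ratio).
cf, pof = pof_upper_bound(margin=2.93, diameter=1.0)
# pof is on the order of 1e-8, far below the 0.1% failure level.
```

The bound is rigorous but conservative: a CF modestly above 1 already certifies a very small failure probability.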
Le Meur, Olivier
This manuscript, which constitutes a synthesis document of my research in preparation for my Habilitation degree (Habilitation à Diriger des Recherches), presents the most important outcomes of my research. Since my PhD degree in September 2005, I have been working on two main research themes: visual attention and saliency-based image editing. Before delving into the details of my research, a brief presentation of visual attention and saliency-based image editing is made. Visual...
Accurate and fast estimation of state of charge and health during operation plays a pivotal role in preventing overcharge or undercharge and in accurately monitoring the state of cell degradation, which requires a model that can be embedded in the battery management system. Currently, the models used are based on either empirical equations or electric equivalent circuit components with voltage sources, or a combination of the two. These models are relatively simple, but limited in representing a wide range of operating behaviors that include the effects of temperature, state of charge (SOC) and degradation, to name a few. On the other hand, full order models (FOMs) are multi-dimensional or multi-scale models based on electrochemical and thermal principles capable of representing the details of cell behavior, but inadequate for real-time applications, simply because of the high computational time. Therefore, there is a need for the development of a model with intermediate performance and real-time capability, which is accomplished by reducing the FOM to what is called a reduced order model (ROM). The battery used for development of the ROM is a pouch-type lithium ion polymer battery (LiPB) made of LiMn2O4 (LMO)/carbon. The reduction of the model was carried out for ion concentrations, potentials and kinetics: the ion concentration in the electrode using the polynomial approach, the ion concentration in the electrolyte using the state space method, and the potentials and electrochemical kinetics by linearization. In addition, the energy equation is used to calculate the cell temperature, on which the diffusion coefficient and the solid electrolyte interphase (SEI) resistance are dependent. The computational time step is determined based on the total computational time and errors at a given SOC range and different C-rates. ROM responses are compared with those of the FOM and experimental data over a single cycle and multiple cycles under different operating conditions.
This paper applies real options theory to overseas oil investment by adding an investment-environment factor to oil-resource valuation. A real options model is developed to illustrate how an investor country (or oil company) can evaluate and compare the critical value of oil-resource investment in different countries under oil-price, exchange-rate, and investment-environment uncertainties. The aim is to establish a broad model that can be used by every oil investor country to value overseas oil resources. The model developed here addresses three key elements: 1) overseas investment (the effects of the investment environment and exchange rates); 2) oil investment (oil price, production decline rate, development cost, etc.); and 3) the comparability of results from different countries (different countries' oil-investment situations can be compared using the option value index (OVI)). China's overseas oil investment is taken as an example to explain the model, by calculating each oil-investee country's critical value per unit of oil reserves and examining the effect of different factors on the critical value. The results show that the model developed here can provide useful advice for China's overseas oil investment program. The research would probably also be helpful to other investor countries looking to invest in overseas oil resources. (author)
Qi Li; Cheng Shao
The composition of the distillation column is a very important quality value in refineries; unfortunately, few hardware sensors are available to measure the distillation compositions on-line. In this paper, a novel method using sensitivity matrix analysis and kernel ridge regression (KRR) to implement on-line soft sensing of distillation compositions is proposed. In this approach, sensitivity matrix analysis is used to select the most suitable secondary variables as the soft sensor's inputs, and KRR is used to build the composition soft sensor. Application to a simulated distillation column demonstrates the effectiveness of the method.
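A KRR soft sensor of the kind described fits dual weights in closed form, alpha = (K + lambda*I)^(-1) y, and predicts via kernel evaluations against the training set. A self-contained numpy sketch on synthetic data (the secondary variables, kernel width and regularization are illustrative assumptions, not the paper's column data):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Hypothetical secondary variables (e.g. tray temperatures) and composition.
X = rng.uniform(-1.0, 1.0, (80, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # KRR dual weights

def predict(X_new):
    """Soft-sensor estimate of the composition from new secondary variables."""
    return rbf_kernel(X_new, X) @ alpha

pred = predict(X[:5])
```

The regularization lam trades training fit against smoothness, the same bias-variance knob a soft sensor needs to cope with process noise.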
In this contribution, a brief survey of some important nuclear reaction models that can be helpful for a global analysis of intermediate-energy nucleon-induced reactions is presented. Essentially, there are two energy regions: the intranuclear cascade regime, where classical Monte Carlo methods are sufficient for a proper description of nuclear reactions, and, for energies below about 150 MeV, the regime where different, more specific approaches are required. Probably the best overall picture is obtained if these two approaches are employed as complementary tools in nuclear data evaluation. A more extensive comparison between the various models has been performed in a recent computer benchmark. (orig.)
This presentation discusses methods used to extrapolate from in vitro high-throughput screening (HTS) toxicity data for an endocrine pathway to in vivo for early life stages in humans, and the use of a life stage PBPK model to address rapidly changing physiological parameters. A...