WorldWideScience

Sample records for methods approach combining

  1. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.
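The either/or combination described in this abstract can be illustrated with a minimal sketch. The linear scorers, weights, and thresholds below are hypothetical stand-ins for the trained 'full' and 'reduced' SVMs, not the Profisi Ensemble's actual parameters:

```python
# Toy either/or two-level combiner. Per-program prediction scores act as
# features; a 'full' linear scorer is used when every program made a call,
# otherwise a 'reduced' scorer runs on the available subset.

FULL_W = [0.4, 0.3, 0.3]       # hypothetical weights of the 'full' model
REDUCED_W = [0.5, 0.25, 0.25]  # hypothetical weights of the 'reduced' model

def linear_score(features, weights, bias):
    return sum(w * x for w, x in zip(weights, features)) + bias

def predict_tss(scores):
    """scores: per-program prediction scores; None = no call made."""
    if all(s is not None for s in scores):
        return linear_score(scores, FULL_W, -0.5) > 0.0    # 'full' path
    filled = [s if s is not None else 0.0 for s in scores]
    return linear_score(filled, REDUCED_W, -0.2) > 0.0     # 'reduced' path
```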

  2. Prediction of a service demand using combined forecasting approach

    Science.gov (United States)

    Zhou, Ling

    2017-08-01

    Forecasting facilitates cutting operational and management costs while maintaining service levels for a logistics service provider. Our case study investigates how to forecast short-term logistics demand for an LTL (less-than-truckload) carrier. A combined approach relies on several forecasting methods simultaneously instead of a single method. It can offset the weakness of one forecasting method with the strength of another, which can improve prediction accuracy. The main issues in combined forecast modeling are how to select the methods to combine, and how to determine the weight coefficients among them. The principles of method selection are that each method should be suited to the forecasting problem itself, and that the methods should differ in character as much as possible. Based on these principles, exponential smoothing, ARIMA, and a neural network were chosen to form the combined approach. The least-squares technique is then employed to determine the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over the three individual methods. The work helps managers select prediction methods in practice.
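The least-squares weighting step can be sketched for two component forecasts (the paper combines three; the data below are illustrative). The weights minimize the squared error of the combined forecast against history, via the closed-form 2x2 normal equations:

```python
def ls_combination_weights(f1, f2, y):
    """Weights (w1, w2) minimizing sum((y - w1*f1 - w2*f2)**2),
    from the closed-form 2x2 normal equations."""
    a11 = sum(x * x for x in f1)
    a12 = sum(u * v for u, v in zip(f1, f2))
    a22 = sum(x * x for x in f2)
    b1 = sum(u * t for u, t in zip(f1, y))
    b2 = sum(v * t for v, t in zip(f2, y))
    det = a11 * a22 - a12 * a12          # assumes f1, f2 not collinear
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

def combined_forecast(w, forecasts):
    """Weighted combination of the individual method forecasts."""
    return sum(wi * fi for wi, fi in zip(w, forecasts))
```

With three methods the same idea yields a 3x3 linear system, usually solved numerically rather than in closed form.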

  3. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain

    OpenAIRE

    Srivastava, Subodh; Sharma, Neeraj; Singh, S. K.; Srivastava, R.

    2014-01-01

    In this paper, a combined approach for enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better-contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based uns...

  4. Combining accounting approaches to practice valuation.

    Science.gov (United States)

    Schwartzben, D; Finkler, S A

    1998-06-01

    Healthcare organizations that wish to acquire physician or ambulatory care practices can choose from a variety of practice valuation approaches. Basic accounting methods assess the value of a physician practice on the basis of a historical, balance-sheet description of tangible assets. Yet these methods alone are inadequate to determine the true financial value of a practice. By using a combination of accounting approaches to practice valuation that consider factors such as fair market value, opportunity cost, and discounted cash flow over a defined time period, organizations can more accurately assess a practice's actual value.

  5. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain.

    Science.gov (United States)

    Srivastava, Subodh; Sharma, Neeraj; Singh, S K; Srivastava, R

    2014-07-01

    In this paper, a combined approach for enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better-contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based unsharp masking and crispening method is applied to the approximation coefficients of the wavelet-transformed images to further highlight abnormalities such as micro-calcifications and tumours, and to reduce false positives (FPs). Thirdly, a modified fuzzy c-means (FCM) segmentation method is applied to the output of the second step. In the modified FCM method, mutual information is proposed as a similarity measure in place of the conventional Euclidean-distance-based dissimilarity measure for FCM segmentation. Finally, the inverse 2D DWT is applied. The efficacy of the proposed unsharp masking and crispening method for image enhancement is evaluated in terms of signal-to-noise ratio (SNR), and that of the proposed segmentation method in terms of random index (RI), global consistency error (GCE), and variation of information (VoI). The performance of the proposed segmentation approach is compared with other commonly used segmentation approaches such as Otsu's thresholding, texture-based methods, k-means and FCM clustering, and thresholding. The results show that the proposed segmentation approach performs better and takes less processing time than the standard FCM and the other segmentation methods considered.
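For reference, the standard FCM updates that the paper modifies can be sketched on 1D data. This sketch uses the conventional Euclidean distance; the paper's contribution is to replace that dissimilarity with mutual information, which is not reproduced here:

```python
def fcm_memberships(xs, centers, m=2.0):
    """Standard FCM membership update: u_ij = 1 / sum_k (d_ij/d_kj)^(2/(m-1))."""
    U = []
    for x in xs:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:  # point coincides with a center: crisp membership
            j = d.index(0.0)
            U.append([1.0 if k == j else 0.0 for k in range(len(centers))])
            continue
        U.append([1.0 / sum((dj / dk) ** (2.0 / (m - 1.0)) for dk in d)
                  for dj in d])
    return U

def fcm_centers(xs, U, m=2.0):
    """Weighted-mean center update using fuzzified memberships."""
    return [sum((u[j] ** m) * x for x, u in zip(xs, U)) /
            sum(u[j] ** m for u in U)
            for j in range(len(U[0]))]
```

Alternating these two updates until the memberships stabilize gives the standard FCM clustering loop.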

  6. Automated Extraction of Cranial Landmarks from Computed Tomography Data using a Combined Method of Knowledge and Pattern Based Approaches

    Directory of Open Access Journals (Sweden)

    Roshan N. RAJAPAKSE

    2016-03-01

    Accurate identification of anatomical structures from medical imaging data is a significant and critical function in the medical domain. Past studies in this context have mainly utilized two approaches: knowledge-based and learning-based methods. Further, most previously reported studies have focused on identifying landmarks from lateral X-ray Computed Tomography (CT) data, particularly in the field of orthodontics. This study, however, focused on extracting cranial landmarks from large sets of cross-sectional CT slices using a method combining the two aforementioned approaches. The proposed method is centered mainly on template data sets, which were created using the actual contour patterns extracted from CT cases for each of the landmarks in consideration. Firstly, these templates were used to devise rules, which are characteristic of the knowledge-based method. Secondly, the same template sets were employed to perform template matching, related to the learning-based approach. The proposed method was tested on two landmarks, the Dorsum sellae and the Pterygoid plate, using CT cases of 5 subjects. The results indicate that, out of the 10 tests, the output images were within the expected range (desired accuracy) in 7 instances and the acceptable range (near accuracy) in 2 instances, thus verifying the effectiveness of the combined, template-set-centric approach proposed in this study.

  7. Seismic data two-step recovery approach combining sparsity-promoting and hyperbolic Radon transform methods

    International Nuclear Information System (INIS)

    Wang, Hanchuang; Chen, Shengchang; Ren, Haoran; Liang, Donghui; Zhou, Huamin; She, Deping

    2015-01-01

    In current research on seismic data recovery problems, the sparsity-promoting method usually produces an insufficient recovery result at the locations of null traces. The HRT (hyperbolic Radon transform) method can be applied to problems of seismic data recovery with approximately hyperbolic events. Influenced by deviations of hyperbolic characteristics between real and ideal travel-time curves, some spurious events are usually introduced, and the recovery of intermediate- and far-offset traces is worse than that of near-offset traces. Sparsity-promoting recovery depends primarily on the sparsity of seismic data in the sparse transform domain (i.e. on the local waveform characteristics), whereas HRT recovery is strongly affected by the global characteristics of the seismic events. Motivated by these observations, a two-step recovery approach combining sparsity-promoting and time-invariant HRT methods is proposed, which is based on both local and global characteristics of the seismic data. Two implementation strategies are presented in detail, and the selection criteria for the relevant strategies are also discussed. Numerical examples with synthetic and real data verify that the new approach achieves a better recovery by simultaneously overcoming the shortcomings of sparsity-promoting recovery and HRT recovery.

  8. [Combined fat products: methodological possibilities for their identification].

    Science.gov (United States)

    Viktorova, E V; Kulakova, S N; Mikhaĭlov, N A

    2006-01-01

    Falsification of milk fat is currently a highly topical problem. A number of methods for verifying the authenticity of milk fat and distinguishing it from combined fat products were considered. The analysis of modern approaches to the evaluation of milk fat authenticity showed that the main method for determining the nature of a fat is gas chromatography. A computer-based method for express identification of fat products is proposed for quickly determining whether an examined fat is natural milk fat or a combined fat product.

  9. Weighting Performance Evaluation Criteria Based on the Balanced Scorecard Approach Using a Combined Shapley Value & Bull's-eye Method

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Kamfiroozi

    2014-05-01

    Performance evaluation is a control tool used by managers in organizations and manufacturing companies. In this paper we present a new model for performance evaluation and ranking of industrial companies under uncertain conditions. The evaluation is based on the balanced scorecard (BSC) method, and three-parameter interval grey numbers are used in place of linguistic variables. The four BSC perspectives are then evaluated and weighted using a combined Bull's-eye-Shapley method, which constitutes the new approach of this article. Three-parameter interval grey numbers and the combination method are used to reduce the effect of environmental uncertainty on the data and the model. This combined weighting method can be used as a new method in decision-making science. Finally, a case study of industrial companies (nail makers) is presented, in which the companies are ranked using the grey TOPSIS method (a generalization of classic TOPSIS to three-parameter interval grey numbers).
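The Shapley-value component of the weighting scheme can be sketched for a toy cooperative game. The paper applies it to criterion weighting with grey numbers, which this sketch does not cover; the players and characteristic function below are purely illustrative:

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value of each player for characteristic function v:
    the marginal contribution averaged over all join orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}
```

In a weighting context, v(S) would measure the joint importance of a subset S of criteria, and the Shapley value distributes total importance fairly across criteria.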

  10. A combined volume-of-fluid method and low-Mach-number approach for DNS of evaporating droplets in turbulence

    Science.gov (United States)

    Dodd, Michael; Ferrante, Antonino

    2017-11-01

    Our objective is to perform DNS of finite-size droplets that are evaporating in isotropic turbulence. This requires fully resolving the momentum, heat, and mass transfer between the droplets and the surrounding gas. We developed a combined volume-of-fluid (VOF) method and low-Mach-number approach to simulate this flow. The two main novelties of the method are: (i) the VOF algorithm captures the motion of the liquid-gas interface in the presence of mass transfer due to evaporation and condensation without requiring a projection step for the liquid velocity, and (ii) the low-Mach-number approach allows for local volume changes caused by phase change while the total volume of the liquid-gas system remains constant. The method is verified against an analytical solution for a Stefan flow problem, and the D2 law is verified for a single droplet in quiescent gas. We also demonstrate the scheme's robustness when performing DNS of an evaporating droplet in forced isotropic turbulence.
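The D2 law used for verification states that the squared diameter of a droplet evaporating in quiescent gas decays linearly in time, d(t)^2 = d0^2 - K*t, where K is the evaporation-rate constant. A minimal sketch (values and units illustrative):

```python
def droplet_diameter_sq(d0, K, t):
    """D2 law: squared diameter decays linearly, d(t)^2 = d0^2 - K*t."""
    return d0 * d0 - K * t

def droplet_lifetime(d0, K):
    """Time at which d(t)^2 reaches zero: t_life = d0^2 / K."""
    return d0 * d0 / K
```

A simulation is typically checked by plotting the computed d^2 against time and confirming a straight line of slope -K.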

  11. An improved EMD method for modal identification and a combined static-dynamic method for damage detection

    Science.gov (United States)

    Yang, Jinping; Li, Peizhen; Yang, Youfa; Xu, Dian

    2018-04-01

    Empirical mode decomposition (EMD) is a highly adaptable signal processing method. However, the EMD approach has certain drawbacks, including distortions from end effects and mode mixing. In the present study, these two problems are addressed using an end-extension method based on the support vector regression machine (SVRM) and a modal decomposition method based on the characteristics of the Hilbert transform. The algorithm involves two steps: first, using the SVRM, the time series is extended at both endpoints to reduce end effects; then, a modified EMD method using the characteristics of the Hilbert transform is performed on the resulting signal to reduce mode mixing. A new combined static-dynamic method for identifying structural damage is also presented. This method combines static and dynamic information in an equilibrium equation that can be solved using the Moore-Penrose generalized matrix inverse. The combined method uses the differences in displacements of the structure with and without damage and the variations in the modal force vector. Tests on a four-story steel-frame structure were conducted to obtain static and dynamic responses of the structure. The modal parameters are identified from the dynamic test data using the improved EMD method, which is shown to be more accurate and effective than the traditional EMD method. Tests with a shear-type frame demonstrate the superior performance of the proposed static-dynamic damage detection approach, which can detect both single and multiple damage locations as well as the degree of damage. For structures with multiple damage sites, the combined approach is more effective than either the static or the dynamic method alone. The proposed EMD method and static-dynamic damage detection method thus offer improved modal identification and damage detection, respectively.

  12. Combining Qualitative and Quantitative Approaches: Some Arguments for Mixed Methods Research

    Science.gov (United States)

    Lund, Thorleif

    2012-01-01

    One purpose of the present paper is to elaborate 4 general advantages of the mixed methods approach. Another purpose is to propose a 5-phase evaluation design, and to demonstrate its usefulness for mixed methods research. The account is limited to research on groups in need of treatment, i.e., vulnerable groups, and the advantages of mixed methods…

  13. Liquid-phase microextraction approaches combined with atomic detection: A critical review

    International Nuclear Information System (INIS)

    Pena-Pereira, Francisco; Lavilla, Isela; Bendicho, Carlos

    2010-01-01

    Liquid-phase microextraction (LPME) displays unique characteristics such as excellent preconcentration capability, simplicity, low cost, sample cleanup and integration of steps. Even though LPME approaches have the potential to be combined with almost every analytical technique, their use in combination with atomic detection techniques has not been exploited until recently. A comprehensive review dealing with the applications of liquid-phase microextraction combined with atomic detection techniques is presented. Theoretical features, possible strategies for these combinations as well as the effect of key experimental parameters influencing method development are addressed. Finally, a critical comparison of the different LPME approaches in terms of enrichment factors achieved, extraction efficiency, precision, selectivity and simplicity of operation is provided.

  14. Transbasal versus endoscopic endonasal versus combined approaches for olfactory groove meningiomas: importance of approach selection.

    Science.gov (United States)

    Liu, James K; Silva, Nicole A; Sevak, Ilesha A; Eloy, Jean Anderson

    2018-04-01

    OBJECTIVE There has been much debate regarding the optimal surgical approach for resecting olfactory groove meningiomas (OGMs). In this paper, the authors analyzed the factors involved in approach selection and reviewed the surgical outcomes in a series of OGMs. METHODS A retrospective review of 28 consecutive OGMs from a prospective database was conducted. Each tumor was treated via one of 3 approaches: transbasal approach (n = 15), pure endoscopic endonasal approach (EEA; n = 5), and combined (endoscope-assisted) transbasal-EEA (n = 8). RESULTS The mean tumor volume was greatest in the transbasal (92.02 cm 3 ) and combined (101.15 cm 3 ) groups. Both groups had significant lateral dural extension over the orbits (transbasal 73.3%, p 95%) was achieved in 20% of transbasal and 37.5% of combined cases, all due to tumor adherence to the critical neurovascular structures. The rate of CSF leakage was 0% in the transbasal and combined groups, and there was 1 leak in the EEA group (20%), resulting in an overall CSF leakage rate of 3.6%. Olfaction was preserved in 66.7% in the transbasal group. There was no significant difference in length of stay or 30-day readmission rate between the 3 groups. The mean modified Rankin Scale score was 0.79 after the transbasal approach, 2.0 after EEA, and 2.4 after the combined approach (p = 0.0604). The mean follow-up was 14.5 months (range 1-76 months). CONCLUSIONS The transbasal approach provided the best clinical outcomes with the lowest rate of complications for large tumors (> 40 mm) and for smaller tumors (OGMs invading the sinonasal cavity. Careful patient selection using an individualized, tailored strategy is important to optimize surgical outcomes.

  15. A Combined Social Action, Mixed Methods Approach to Vocational Guidance Efficacy Research

    Science.gov (United States)

    Perry, Justin C.

    2009-01-01

    This article proposes a social action, mixed methods approach to verifying the efficacy of vocational guidance programs. Research strategies are discussed in the context of how the processes and purposes of efficacy research have been conceptualized and studied in vocational psychology. Examples of how to implement this approach in future efficacy…

  16. A holistic method to assess building energy efficiency combining D-S theory and the evidential reasoning approach

    International Nuclear Information System (INIS)

    Yao Runming; Yang Yulan; Li Baizhan

    2012-01-01

    The assessment of building energy efficiency is one of the most effective measures for reducing building energy consumption. This paper proposes a holistic method (HMEEB) for assessing and certifying the energy efficiency of buildings based on the D-S (Dempster-Shafer) theory of evidence and the Evidential Reasoning (ER) approach. HMEEB has three main features: (i) it provides a method to assess and certify building energy efficiency, and serves as an analytical tool to identify improvement opportunities; (ii) it combines a wealth of information on building energy efficiency assessment, including the identification of indicators and a weighting mechanism; and (iii) it provides a method to identify and deal with the inherent uncertainties in the assessment procedure. The paper demonstrates the robustness, flexibility and effectiveness of the proposed method with two examples assessing the energy efficiency of two residential buildings, both located in the ‘Hot Summer and Cold Winter’ zone in China. The proposed certification method provides detailed recommendations for policymakers in the context of carbon emission reduction targets and promoting energy efficiency in the built environment. The method is transferable to other countries and regions, using the indicator weighting system to accommodate local climatic, economic and social factors. - Highlights: ► Assesses the energy efficiency of buildings holistically. ► Applies the D-S (Dempster-Shafer) theory of evidence and the Evidential Reasoning (ER) approach. ► Handles large amounts of information and uncertainty in the energy efficiency decision-making process. ► Provides rigorous measures for policymakers to meet carbon emission reduction targets.
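At the core of D-S evidence theory is Dempster's rule of combination, on which the ER approach builds. A minimal sketch for two mass functions over a common frame of discernment; the frame {'E', 'I'} ("efficient"/"inefficient") is illustrative, not the paper's indicator set:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over a common frame.
    Keys are frozensets (focal elements); masses in each function sum to 1."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb   # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize by 1 - K, redistributing the conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}
```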

  17. Non-coding RNA detection methods combined to improve usability, reproducibility and precision

    Directory of Open Access Journals (Sweden)

    Kreikemeyer Bernd

    2010-09-01

    Abstract Background Non-coding RNAs are gaining more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, the combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated into detection workflows using custom scripts, which decreases transparency and reproducibility. Results We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods, integrated by our framework, we determined four highly probable candidates, all of which we verified experimentally using RT-PCR. Conclusions We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL, version 3) at http://www.sbi.uni-rostock.de/moses, along with source code, screen shots, examples and tutorial material.

  18. Group decision-making approach for flood vulnerability identification using the fuzzy VIKOR method

    Science.gov (United States)

    Lee, G.; Jun, K. S.; Chung, E.-S.

    2015-04-01

    This study proposes an improved group decision making (GDM) framework that combines the VIKOR method with data fuzzification to quantify spatial flood vulnerability over multiple criteria. In general, the GDM method is an effective tool for formulating a compromise solution among various decision makers, since different stakeholders may have different perspectives on their flood risk/vulnerability management responses. The GDM approach is designed to achieve consensus building that reflects the viewpoints of each participant. The fuzzy VIKOR method was developed to solve multi-criteria decision making (MCDM) problems with conflicting and noncommensurable criteria. This compromise-ranking method can be used to obtain a nearly ideal solution according to all established criteria. Combining the GDM method with the fuzzy VIKOR method can thus effectively propose compromise decisions. The spatial flood vulnerability of the southern Han River obtained with the GDM approach combined with the fuzzy VIKOR method was compared with the spatial flood vulnerability obtained with general MCDM methods, such as fuzzy TOPSIS, and classical GDM methods (i.e., Borda, Condorcet, and Copeland). The results show that the proposed fuzzy GDM approach can reduce the uncertainty in the data confidence and weight derivation techniques. Thus, the combination of the GDM approach with the fuzzy VIKOR method provides robust prioritization because it actively reflects the opinions of various groups and considers uncertainty in the input data.
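The crisp (non-fuzzy) core of VIKOR can be sketched as follows; the paper's fuzzy variant replaces crisp scores with fuzzified values, which this sketch does not cover. The scores, weights, and compromise parameter v are illustrative, and degenerate cases (all alternatives equal on S or R) are not handled:

```python
def vikor(scores, weights, v=0.5):
    """Crisp VIKOR: scores[i][j] = performance of alternative i on
    benefit criterion j. Returns the Q index per alternative (lower = better)."""
    n_crit = len(weights)
    best = [max(s[j] for s in scores) for j in range(n_crit)]
    worst = [min(s[j] for s in scores) for j in range(n_crit)]
    S, R = [], []  # group utility and individual regret
    for s in scores:
        terms = [weights[j] * (best[j] - s[j]) / (best[j] - worst[j])
                 for j in range(n_crit)]
        S.append(sum(terms))
        R.append(max(terms))
    s_min, s_max = min(S), max(S)
    r_min, r_max = min(R), max(R)
    return [v * (S[i] - s_min) / (s_max - s_min)
            + (1.0 - v) * (R[i] - r_min) / (r_max - r_min)
            for i in range(len(scores))]
```

In the GDM setting, each stakeholder group's weights would produce its own Q ranking, and the compromise solution is negotiated across those rankings.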

  19. Combining a survey approach and energy and indoor environment auditing in historic buildings

    DEFF Research Database (Denmark)

    Rohdin, Patrik; Dalewski, Mariusz; Moshfegh, Bahram

    2016-01-01

    Purpose – This paper presents an approach where a survey study is combined with energy and indoor environment auditing in the built environment. The combination of methods presented in this paper is one way to obtain a wider perspective on the indoor environment and energy use and also let...... this research project. Design/methodology/approach – A combination of energy and indoor environment auditing and standardized occupant surveys. Findings – The main findings in the paper are related to the good agreement between results from standardized occupant surveys and physical measurements...

  20. Combined endoscopic approaches to the cardiac sphincter achalasia treatment

    Directory of Open Access Journals (Sweden)

    V. N. Klimenko

    2015-12-01

    Aim. To assess combined endoscopic approaches to the treatment of achalasia of the cardiac sphincter. Results. Preliminary results of treatment, and the methods for performing combined endoscopic pneumocardiodilatation and injections of botulinum toxin type A ('Disport') for achalasia cardia, are described in the article. Aetio-pathogenetic aspects of the development of achalasia cardia, and the action of botulinum toxin type A and balloon pneumocardiodilatation of the esophagus, are described, and a modern roentgen-endoscopic classification of achalasia cardia is given. A prognostic scale for estimating the possibility of subsequent combined endoscopic or surgical treatment has been defined and is being developed further. Conclusion. The clinical cases described vividly demonstrate the variety of clinical manifestations of achalasia cardia and also support earlier consideration of surgical treatment.

  1. Innovative spectrophotometric methods for simultaneous estimation of the novel two-drug combination: Sacubitril/Valsartan through two manipulation approaches and a comparative statistical study

    Science.gov (United States)

    Eissa, Maya S.; Abou Al Alamein, Amal M.

    2018-03-01

    Different innovative spectrophotometric methods were introduced for the first time for the simultaneous quantification of sacubitril/valsartan in their binary mixture and in their combined dosage form, without prior separation, through two manipulation approaches. The first approach is based on two-wavelength selection in zero-order absorption spectra, namely: the dual wavelength method (DWL) at 226 nm and 275 nm for valsartan; the induced dual wavelength method (IDW) at 226 nm and 254 nm for sacubitril; and advanced absorbance subtraction (AAS) based on their iso-absorptive point at 246 nm (λiso) and 261 nm (where sacubitril shows equal absorbance values at the two selected wavelengths). The second approach is based on ratio spectra using their normalized spectra, namely: the ratio difference spectrophotometric method (RD) at 225 nm and 264 nm for both drugs in their ratio spectra; the first derivative of ratio spectra (DR1) at 232 nm for valsartan and 239 nm for sacubitril; and mean centering of ratio spectra (MCR) at 260 nm for both drugs. Both sacubitril and valsartan showed linearity over the range of 2.5-25.0 μg/mL upon application of these methods. The developed spectrophotometric methods were successfully applied to the analysis of their combined tablet dosage form ENTRESTO™, and were validated according to ICH guidelines. The results obtained from the proposed methods were statistically compared to a reported HPLC method using Student's t-test and the F-test, and a comparative study was also carried out with one-way ANOVA, showing no significant difference in precision and accuracy.
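The dual-wavelength idea can be sketched with Beer's law: the two wavelengths are chosen so that the interfering component absorbs equally at both, so the absorbance difference depends only on the analyte. The absorptivity values below are hypothetical, not the paper's measured values:

```python
def absorbance(eps, concs):
    """Beer's law for a mixture at unit path length: A = sum(eps_i * c_i)."""
    return sum(e * c for e, c in zip(eps, concs))

# Hypothetical absorptivities at the two wavelengths (226 nm, 275 nm).
# Sacubitril is assumed to absorb equally at both, which is the condition
# the dual-wavelength method exploits.
EPS_226 = {"valsartan": 0.080, "sacubitril": 0.050}
EPS_275 = {"valsartan": 0.030, "sacubitril": 0.050}

def valsartan_conc(a226, a275):
    """Sacubitril's contribution cancels in the absorbance difference."""
    dA = a226 - a275
    return dA / (EPS_226["valsartan"] - EPS_275["valsartan"])
```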

  2. Combined time-varying forecast based on the proper scoring approach for wind power generation

    DEFF Research Database (Denmark)

    Chen, Xingying; Jiang, Yu; Yu, Kun

    2017-01-01

    Compared with traditional point forecasts, combined forecasts have been proposed as an effective method to provide more accurate forecasts than individual models. However, the literature and research focusing on wind-power combined forecasts are relatively limited. Here, based on the forecasting error...... distribution, a proper scoring approach is applied to combine plausible models into an overall time-varying model for next-day forecasts, rather than a weights-based combination. To validate the effectiveness of the proposed method, real data covering 3 years were used for testing. Simulation results...... demonstrate that the proposed method improves the accuracy of the overall forecasts, even compared with a numerical weather prediction....

  3. COMPARISONS BETWEEN AND COMBINATIONS OF DIFFERENT APPROACHES TO ACCELERATE ENGINEERING PROJECTS

    Directory of Open Access Journals (Sweden)

    H. Steyn

    2012-01-01

    ENGLISH ABSTRACT: In this article, traditional project management methods such as PERT and CPM, as well as fast-tracking and systems approaches, viz. concurrent engineering and critical chain, are reviewed with specific reference to their contribution to reducing the duration of the execution phase of engineering projects. Each of these techniques has some role to play in the acceleration of project execution. Combinations of approaches are evaluated by considering the potential of sets consisting of two different approaches each. While the PERT and CPM approaches have been combined for many years in a technique called PERT/CPM, new combinations of approaches are discussed. Certain assumptions inherent to PERT, and often wrong, are not made by the critical-chain approach.

    AFRIKAANSE OPSOMMING (translated): In this article, traditional project management approaches such as PERT and CPM, as well as project acceleration and systems approaches, namely concurrent engineering and critical chain, are examined with respect to the contribution each can make to accelerating the execution phase of engineering projects. Each of these approaches can make a specific contribution to the acceleration of projects. Combinations, each consisting of two different approaches, are evaluated. While PERT and CPM have been used in combination for many years, new combinations are also discussed here. Certain assumptions inherent to the PERT approach are often wrong; these assumptions are not made by the critical-chain approach.
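The CPM forward pass discussed above can be sketched as follows: each activity's earliest finish is its duration plus the latest earliest finish among its predecessors, and the project duration is the maximum earliest finish. The activity network below is illustrative; PERT would replace each fixed duration with the mean of a beta distribution:

```python
def cpm_longest_path(durations, preds):
    """CPM forward pass. durations: {activity: duration};
    preds: {activity: [predecessor activities]}.
    Returns (earliest finish per activity, project duration)."""
    ef = {}
    def finish(a):
        if a not in ef:
            start = max((finish(p) for p in preds.get(a, [])), default=0)
            ef[a] = start + durations[a]
        return ef[a]
    for a in durations:
        finish(a)
    return ef, max(ef.values())
```

A backward pass over the same network would then give latest start times and identify the zero-slack (critical) activities.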

  4. Relative conservatisms of combination methods used in response spectrum analyses of nuclear piping systems

    International Nuclear Information System (INIS)

    Gupta, S.; Kustu, O.; Jhaveri, D.P.; Blume, J.A.

    1983-01-01

    The paper presents the conclusions of a comprehensive study that investigated the relative conservatisms of various combination techniques. Two approaches were taken, producing mutually consistent results. In the first, 20 representative nuclear piping systems were systematically analyzed using the response spectrum method, and the total response was obtained using nine different combination methods. One procedure, using the SRSS method for combining spatial components of response and the 10% method for combining the responses of different modes (currently acceptable to the U.S. NRC), served as the standard for comparison. Responses computed by the other methods were normalized to this standard method, and the resulting response ratios were used to develop cumulative frequency-distribution curves, which established the relative conservatism of the methods in a probabilistic sense. In the second approach, 30 single-degree-of-freedom (SDOF) systems representing different modes of hypothetical piping systems, with natural frequencies ranging from 1 Hz to 30 Hz, were analyzed for 276 sets of three-component recorded ground motion. A set of hypothetical systems covering a variety of modes and frequency ranges was developed. The responses of these systems were computed from the responses of the SDOF systems either by combining the spatial response components by algebraic summation and the individual mode responses by the Navy method, or by combining both spatial and modal response components using the SRSS method. Probability density functions and cumulative distribution functions were developed for the ratio of the responses obtained by the two methods.
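The two modal-combination rules named above can be sketched as follows. The 10% method here follows its usual formulation (SRSS plus absolute cross terms for mode pairs whose frequencies lie within 10% of each other); the responses and frequencies are illustrative:

```python
from math import sqrt

def srss(responses):
    """Square root of the sum of squares of modal (or spatial) responses."""
    return sqrt(sum(r * r for r in responses))

def ten_percent_rule(responses, freqs):
    """10% method sketch: SRSS plus absolute cross terms for every pair
    of modes whose frequencies are within 10% of each other."""
    total = sum(r * r for r in responses)
    n = len(responses)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(freqs[i] - freqs[j]) <= 0.1 * min(freqs[i], freqs[j]):
                total += 2.0 * abs(responses[i] * responses[j])
    return sqrt(total)
```

With well-separated frequencies the two rules coincide; closely spaced modes make the 10% result larger, i.e. more conservative.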

  5. Life prediction methods for the combined creep-fatigue endurance

    International Nuclear Information System (INIS)

    Wareing, J.; Lloyd, G.J.

    1980-09-01

    The basis and current status of development of the various approaches to the prediction of the combined creep-fatigue endurance are reviewed. It is concluded that an inadequate materials data base makes it difficult to draw sensible conclusions about the prediction capabilities of each of the available methods. Correlation with data for stainless steel 304 and 316 is presented. (U.K.)

  6. A method and tool for combining differential or inclusive measurements obtained with simultaneously constrained uncertainties

    Science.gov (United States)

    Kieseler, Jan

    2017-11-01

    A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters, representing sources of systematic uncertainties. As a result of beneficial constraints from the data, all such fitted parameters are correlated with one another. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, this information is only rarely publicly available. In the absence of this information, the most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies only on the public result and its covariance or Hessian, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections.
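
    For Gaussian measurements with known covariances, the combined-likelihood baseline mentioned above reduces to a covariance-weighted (weighted-least-squares) average. The sketch below shows only that textbook baseline, not the paper's method or its software package; the function name is illustrative.

```python
import numpy as np

def covariance_weighted_combination(measurements, covariances):
    """Weighted-least-squares combination of independent measurement vectors
    x_i with covariance matrices C_i:
        x = (sum_i C_i^-1)^-1 * sum_i C_i^-1 x_i
    Returns the combined vector and its covariance."""
    precision = sum(np.linalg.inv(C) for C in covariances)
    weighted = sum(np.linalg.inv(C) @ np.asarray(x, dtype=float)
                   for x, C in zip(measurements, covariances))
    combined_cov = np.linalg.inv(precision)
    return combined_cov @ weighted, combined_cov
```

    Two unit-variance measurements of the same quantity average to their midpoint with halved variance; correlations induced by shared, simultaneously fitted nuisance parameters are precisely what this naive form cannot see unless they are encoded in the published covariance.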

  7. A method and tool for combining differential or inclusive measurements obtained with simultaneously constrained uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Kieseler, Jan [CERN, Geneva (Switzerland)

    2017-11-15

    A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters, representing sources of systematic uncertainties. As a result of beneficial constraints from the data, all such fitted parameters are correlated with one another. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, this information is only rarely publicly available. In the absence of this information, the most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies only on the public result and its covariance or Hessian, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections. (orig.)

  8. Combined approach for gynecomastia

    Directory of Open Access Journals (Sweden)

    El-Sabbagh, Ahmed Hassan

    2016-02-01

    Full Text Available Background: Gynecomastia is a deformity of the male chest. Treatment of gynecomastia has ranged from direct surgical excision to other techniques (mainly liposuction) to a combination of both. Skin excision is performed according to the grade. In this study, experience with liposuction as an adjunct to surgical excision is described. Patients and methods: Between September 2012 and April 2015, a total of 14 patients were treated with liposuction and surgical excision through a periareolar incision. Preoperative evaluation was performed in all cases to exclude any underlying cause of gynecomastia. Results: All fourteen patients were treated bilaterally (28 breasts). Their ages ranged between 13 and 33 years. Two patients were classified as grade I, and four each as grades IIa, IIb and III. One patient developed a seroma. Partial superficial epidermolysis of the areola occurred in 2 cases. Superficial infection of the incision occurred in one case and was treated conservatively. Conclusion: All grades of gynecomastia were managed by the same approach. Skin excision was added for one patient who had severe skin excess with limited activity and poor skin quality. No case required a second operation or a second opinion.

  9. Combining morphometric evidence from multiple registration methods using dempster-shafer theory

    Science.gov (United States)

    Rajagopalan, Vidya; Wyatt, Christopher

    2010-03-01

    In tensor-based morphometry (TBM), group-wise differences in brain structure are measured using high degree-of-freedom registration and some form of statistical test. However, it is known that TBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it is difficult to determine the best registration method for TBM. The use of statistical tests is also problematic given the corrections required for multiple testing and the notorious difficulty of selecting and interpreting significance values. This paper presents an approach to address both of these issues by combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. This approach is applied to the comparison of brain morphometry in aging, a typical application of TBM, using the determinant of the Jacobian as a measure of volume change. We show that the Dempster-Shafer combination produces a unique and easy-to-interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing.
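
    The core operation of Dempster-Shafer theory is the combination of mass functions from independent sources. The following is a minimal generic sketch of Dempster's rule, not the paper's registration-specific pipeline; the frame labels in the usage note are hypothetical.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, each given as
    a dict mapping frozenset focal elements to masses. Mass assigned to
    conflicting (disjoint) pairs is normalized away."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # disjoint focal elements: pure conflict
    if conflict >= 1.0:
        raise ValueError("total conflict; combination undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

    For example, two sources that assign 0.6 and 0.5 to a hypothetical "change" hypothesis (rest to ignorance) combine to a belief of 0.8 in "change", illustrating how agreement between registration methods sharpens belief.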

  10. Experiential Approach to Teaching Statistics and Research Methods ...

    African Journals Online (AJOL)

    Statistics and research methods are among the more demanding topics for students of education to master at both the undergraduate and postgraduate levels. It is our conviction that teaching these topics should be combined with real practical experiences. We discuss an experiential teaching/learning approach that ...

  11. A fast combination method in DSmT and its application to recommender system.

    Directory of Open Access Journals (Sweden)

    Yilin Dong

    Full Text Available In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) to subjective probabilities (called Bayesian BBAs). This necessity occurs if one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or if one needs to make a decision in decision-making problems. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), to obtain the final Bayesian BBAs based on hierarchical decomposition (coarsening) of the frame of discernment. In this method, focal elements with probabilities are coarsened efficiently to reduce the computational complexity of the combination by using a disagreement vector and a simple dichotomous approach. In order to prove the practicality of our approach, it is applied to combine users' soft preferences in recommender systems (RSs). Additionally, in order to make a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is regarded as a baseline in a range of experiments. According to the results of the experiments, MRC is more effective in accuracy of recommendations compared to the original rigid coarsening (RC) method and comparable in computational time.

  12. A fast combination method in DSmT and its application to recommender system.

    Science.gov (United States)

    Dong, Yilin; Li, Xinde; Liu, Yihai

    2018-01-01

    In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) to subjective probabilities (called Bayesian BBAs). This necessity occurs if one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or if one needs to make a decision in decision-making problems. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), to obtain the final Bayesian BBAs based on hierarchical decomposition (coarsening) of the frame of discernment. In this method, focal elements with probabilities are coarsened efficiently to reduce the computational complexity of the combination by using a disagreement vector and a simple dichotomous approach. In order to prove the practicality of our approach, it is applied to combine users' soft preferences in recommender systems (RSs). Additionally, in order to make a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is regarded as a baseline in a range of experiments. According to the results of the experiments, MRC is more effective in accuracy of recommendations compared to the original rigid coarsening (RC) method and comparable in computational time.
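
    The MRC algorithm itself is not reproduced in the abstract, but the task it addresses, turning a general BBA into a Bayesian one, can be illustrated with the standard pignistic transform, which splits each focal element's mass equally among its singletons. This is a generic baseline for that conversion, not the authors' coarsening method.

```python
def pignistic_transform(bba):
    """Map a general BBA (dict: frozenset focal element -> mass) to a
    Bayesian BBA (probability per singleton) by dividing each focal
    element's mass equally among its elements (BetP)."""
    betp = {}
    for focal, mass in bba.items():
        share = mass / len(focal)  # equal split across the focal element
        for element in focal:
            betp[element] = betp.get(element, 0.0) + share
    return betp
```

    For instance, mass 0.5 on {a} plus mass 0.5 on {a, b} yields BetP(a) = 0.75 and BetP(b) = 0.25, a probability vector that can feed a probabilistic recommender directly.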

  13. The failure combination method: presentation, application to a simple collection of systems

    International Nuclear Information System (INIS)

    Llory, M.; Villemeur, A.

    1981-11-01

    The main advantages of this particular method for analyzing the reliability and safety of systems, the failure combination method, are presented. This is an inductive method of analysis; it makes it possible to extend the Failure Modes and Effects Analysis (FMEA) until overall failures are obtained. In this manner, an inductive approach yields all the combinations of failure modes leading to abnormal functioning of systems. The method also makes it possible to study complex interacting systems as a whole and to systematically inventory the abnormal functioning of these systems, starting from the failure modes of the components and their combinations. It can be used from the design stage of systems onward and is an excellent dialogue tool between the various specialists concerned with problems of safety, operation and reliability [fr

  14. Paper Prototyping: The Surplus Merit of a Multi-Method Approach

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2015-07-01

    Full Text Available This article describes a multi-method approach for usability testing. The approach combines paper prototyping and think-aloud with two supplemental methods: advanced scribbling and a handicraft task. The method of advanced scribbling instructs the participants to use different colors for marking important, unnecessary and confusing elements in a paper prototype. In the handicraft task, the participants build a paper prototype of their desired version. Both methods deliver additional information on the needs and expectations of the potential users and provide helpful indicators for clarifying complex or contradictory findings. The multi-method approach and its surplus benefit are illustrated by a pilot study on the redesign of the homepage of a library 2.0. The findings provide positive evidence for the applicability of advanced scribbling and the handicraft task, as well as for the surplus merit of the multi-method approach. The article closes with a discussion and outlook. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs150379

  15. Prioritizing the refactoring need for critical component using combined approach

    Directory of Open Access Journals (Sweden)

    Rajni Sehgal

    2018-10-01

    Full Text Available One of the most promising strategies for smoothing out the maintainability issues of software is refactoring. Due to the lack of a proper design approach, code often inherits bad smells, which may lead to improper functioning of the code, especially when it is subject to change and requires maintenance. Many studies have been performed to optimize the refactoring strategy, which is also a very expensive process. In this paper, a component-based system is considered, and a fuzzy multi-criteria decision making (FMCDM) model is proposed that combines subjective and objective weights to rank the components according to their urgency of refactoring. The JDeodorant tool is used to detect code smells in the individual components of a software system. The objective method uses the entropy approach to rank the components having code smells. The subjective method uses the fuzzy TOPSIS approach, based on decision makers' judgement, to identify the criticality and dependency of these code smells on the overall software. The suggested approach is implemented on a component-based software system having 15 components. The constituent components are ranked based on their refactoring requirements.

  16. RELAP5 simulation of surge line break accident using combined and best estimate plus uncertainty approaches

    International Nuclear Information System (INIS)

    Kristof, Marian; Kliment, Tomas; Petruzzi, Alessandro; Lipka, Jozef

    2009-01-01

    Licensing calculations in a majority of countries worldwide still rely on a combined approach: a best-estimate computer code is applied without evaluation of the uncertainty of the code models, together with conservative assumptions on initial and boundary conditions, on the availability of systems and components, and further conservative assumptions. However, the best estimate plus uncertainty (BEPU) approach, representing the state of the art in safety analysis, has a clear potential to replace the currently used combined approach. There are several applications of the BEPU approach in licensing calculations, but some questions remain under discussion, notably from the regulatory point of view. In order to find a proper solution to these questions and to support the BEPU approach in becoming a standard approach for licensing calculations, a broad comparison of both approaches for various transients is necessary. Results of one such comparison, for the VVER-440/213 NPP pressurizer surge line break event, are described in this paper. A Kv-scaled simulation based on the PH4-SLB experiment from the PMK-2 integral test facility, applying its volume and power scaling factors, is performed for qualitative assessment of the RELAP5 computer code calculation using the VVER-440/213 plant model. Existing hardware differences are identified and explained. The CIAU method is adopted for the uncertainty evaluation. Results using the combined and BEPU approaches are in agreement with the experimental values from the PMK-2 facility. Only a minimal difference between the combined and BEPU approaches has been observed in the evaluation of the safety margins for the peak cladding temperature. Benefits of the CIAU uncertainty method are highlighted.

  17. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    Science.gov (United States)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method - combined simulated annealing (SA) and genetic algorithm (GA) approach is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process to search for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different size and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
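
    The interplay described above, SA as the main search process with GA operators generating new solutions, can be caricatured in a few lines. The sketch below selects a subset of candidate routes minimizing a user-supplied cost; it is a toy under our own assumptions (a single swap mutation, no crossover, no demand assignment), not the authors' procedure.

```python
import math
import random

def combined_sa_ga(candidates, cost, n_select, iters=2000, t0=1.0, alpha=0.995, seed=1):
    """Simulated annealing over subsets of candidate routes, with a GA-style
    mutation (swap one selected route for an unselected one) generating new
    solutions. `cost` scores a subset; lower is better."""
    rng = random.Random(seed)
    current = rng.sample(candidates, n_select)
    cur_cost = cost(current)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(iters):
        neighbor = list(current)
        neighbor[rng.randrange(n_select)] = rng.choice(
            [c for c in candidates if c not in current])  # GA-style mutation
        new_cost = cost(neighbor)
        # SA acceptance: always take improvements, sometimes accept worse moves
        if new_cost < cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            current, cur_cost = neighbor, new_cost
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        t *= alpha  # geometric cooling schedule
    return best, best_cost
```

    With a toy cost such as the sum of route indices, the loop quickly settles near the cheapest subset; in the paper's setting the cost would instead comprise user and operator costs evaluated after demand assignment.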

  18. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate such incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.
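
    The reliability evaluation inside an RBDO loop can be as simple as a Monte Carlo estimate of the failure probability for a limit-state function g(x), with g(x) < 0 denoting failure. A minimal sketch with normally distributed inputs follows; the limit state and parameters in the usage note are illustrative, not from the review.

```python
import numpy as np

def failure_probability(limit_state, mean, std, n=100_000, seed=0):
    """Crude Monte Carlo reliability estimate: draw normally distributed
    input vectors and count violations of the limit state (g(x) < 0)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, std, size=(n, len(mean)))
    g = np.apply_along_axis(limit_state, 1, x)
    return float(np.mean(g < 0.0))
```

    For the classic resistance-minus-load case g = R - S with R ~ N(5, 1) and S ~ N(3, 1), the exact failure probability is about 0.079; an optimizer would then adjust the design parameters (here the means) until this probability meets the target.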

  19. A comprehensive high-resolution mass spectrometry approach for characterization of metabolites by combination of ambient ionization, chromatography and imaging methods.

    Science.gov (United States)

    Berisha, Arton; Dold, Sebastian; Guenther, Sabine; Desbenoit, Nicolas; Takats, Zoltan; Spengler, Bernhard; Römpp, Andreas

    2014-08-30

    An ideal method for bioanalytical applications would deliver spatially resolved quantitative information in real time and without sample preparation. In reality, these requirements can typically not be met by a single analytical technique. Therefore, we combine different mass spectrometry approaches (chromatographic separation, ambient ionization and imaging techniques) in order to obtain comprehensive information about metabolites in complex biological samples. Samples were analyzed by laser desorption followed by electrospray ionization (LD-ESI) as an ambient ionization technique, by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging for spatial distribution analysis, and by high-performance liquid chromatography/electrospray ionization mass spectrometry (HPLC/ESI-MS) for quantitation and validation of compound identification. All MS data were acquired with high mass resolution and accurate mass (using orbital trapping and ion cyclotron resonance mass spectrometers). Grape berries were analyzed and evaluated in detail, whereas wheat seeds and mouse brain tissue were analyzed in proof-of-concept experiments. In situ measurements by LD-ESI without any sample preparation allowed for fast screening of plant metabolites on the grape surface. MALDI imaging of grape cross sections at 20 µm pixel size revealed the detailed distribution of metabolites, which was in accordance with their biological function. HPLC/ESI-MS was used to quantify 13 anthocyanin species as well as to separate and identify isomeric compounds. A total of 41 metabolites (amino acids, carbohydrates, anthocyanins) were identified with all three approaches. Mass accuracy for all MS measurements was better than 2 ppm (root mean square error). The combined approach provides fast screening capabilities, spatial distribution information and the possibility to quantify metabolites. Accurate mass measurements proved to be critical in order to reliably combine data from different MS

  20. arXiv A method and tool for combining differential or inclusive measurements obtained with simultaneously constrained uncertainties

    CERN Document Server

    Kieseler, Jan

    2017-11-22

    A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters, representing sources of systematic uncertainties. As a result of beneficial constraints from the data, all such fitted parameters are correlated with one another. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, this information is only rarely publicly available. In the absence of this information, the most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies only on the public result and its covariance or Hessian, and is validated against the combined-likelihood approach. A d...

  1. Characterisation of radioactive contaminated materials by combined radiometric and spectrometric methods

    International Nuclear Information System (INIS)

    Dulama, C.; Toma, A.; Dobrin, R.; Ciocîrlan, C.; Stoica, S.; Valeca, M.; Popescu, I. I.

    2013-01-01

    In the present paper, a combined analytical methodology for the characterization of radioactive contaminated materials is described. The subject of the testing activities was a set of solutions provided by the Cernavoda NPP, originating from radiological surveys of workplaces in the plant. In the introduction section, a theoretical approach is given to the origin and nature of the main radionuclides occurring in the primary cooling system of the nuclear power plant, with the aim of establishing selection criteria and performance requirements for the analytical methods to be used in developing the characterization methodology. A combination of radiometric and spectrometric methods was selected, based on gross beta counting, high-resolution gamma-ray spectrometry and liquid scintillation counting. (authors)

  2. A Preliminary Report on Combined Penoscrotal and Perineal Approach for Placement of Penile Prosthesis with Corporal Fibrosis

    Directory of Open Access Journals (Sweden)

    John P. Brusky

    2008-01-01

    Full Text Available Purpose. This paper aims at describing the combined penoscrotal and perineal approach for placement of penile prosthesis in cases of severe corporal fibrosis and scarring. Materials and methods. Three patients with extensive corporal fibrosis underwent penile prosthesis placement via combined penoscrotal and perineal approach from 1997 to 2006. Follow-up ranged from 15 to 129 months. Results. All patients underwent successful implantation of semirigid penile prosthesis. There were no short- or long-term complications. Conclusions. Results on combined penoscrotal and perineal approach to penile prosthetic surgery in this preliminary series of patients suggest that it is a safe technique and increases the chance of successful outcome in the surgical management of severe corporal fibrosis.

  3. Combined Yamamoto approach for simultaneous estimation of adsorption isotherm and kinetic parameters in ion-exchange chromatography.

    Science.gov (United States)

    Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand

    2015-09-25

    Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for an Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintains the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Alternate modal combination methods in response spectrum analysis

    International Nuclear Information System (INIS)

    Bezler, P.; Curreri, J.R.; Wang, Y.K.; Gupta, A.K.

    1990-10-01

    In piping analyses using the response spectrum method, Square Root of the Sum of the Squares (SRSS) with clustering of closely spaced modes is the procedure most commonly used to combine the modal response components. This procedure is simple to apply and normally yields conservative estimates of the time history results. The purpose of this study is to investigate alternate methods of combining the modal response components. These methods are mathematically based to properly account for the combination of rigid and flexible modal responses as well as closely spaced modes. The methods are those advanced by Gupta, Hadjian and Lindley-Yow to address rigid response modes, and the Double Sum Combination (DSC) method and the Complete Quadratic Combination (CQC) method to account for closely spaced modes. A direct comparison between these methods as well as the SRSS procedure is made by using them to predict the response of six piping systems. The results provided by each method are compared to the corresponding time history estimates of results as well as to each other. The degree of conservatism associated with each method is characterized. 19 refs., 16 figs., 10 tabs
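
    Of the listed rules, the CQC method is the easiest to state compactly: it generalizes SRSS with cross-modal correlation coefficients, here written in Der Kiureghian's form for equal modal damping. This is a hedged sketch of the textbook rule, not necessarily the exact formulation benchmarked in the report.

```python
import numpy as np

def cqc(modal_responses, frequencies_hz, damping=0.05):
    """Complete Quadratic Combination of modal responses using
    Der Kiureghian's cross-modal correlation coefficients (equal
    modal damping ratio assumed for all modes)."""
    r = np.asarray(modal_responses, dtype=float)
    w = 2.0 * np.pi * np.asarray(frequencies_hz, dtype=float)
    z = damping
    total = 0.0
    for i in range(len(r)):
        for j in range(len(r)):
            b = w[j] / w[i]  # frequency ratio of the mode pair
            rho = (8 * z**2 * (1 + b) * b**1.5) / (
                (1 - b**2)**2 + 4 * z**2 * b * (1 + b)**2)
            total += rho * r[i] * r[j]
    return float(np.sqrt(total))
```

    For well separated frequencies the correlation coefficients vanish and CQC reduces to SRSS; for coincident frequencies they equal one and CQC reduces to the absolute sum, which is why it handles closely spaced modes gracefully.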

  5. The index-flood and the GRADEX methods combination for flood frequency analysis.

    Science.gov (United States)

    Fuentes, Diana; Di Baldassarre, Giuliano; Quesada, Beatriz; Xu, Chong-Yu; Halldin, Sven; Beven, Keith

    2017-04-01

    Flood frequency analysis is used in many applications, including flood risk management, design of hydraulic structures, and urban planning. However, such analysis requires long series of observed discharge data, which are often not available in many basins around the world. In this study, we tested the usefulness of combining regional discharge and local precipitation data to estimate the event flood volume frequency curve for 63 catchments in Mexico, Central America and the Caribbean. This was achieved by combining two existing flood frequency analysis methods: the regional index-flood approach and the GRADEX method. For return periods of up to 10 years, following the index-flood approach, a similar shape of the scaled flood frequency curve was assumed for catchments with similar flood behaviour. For return periods larger than 10 years, following the GRADEX method, the probability distributions of rainfall and discharge volumes were assumed to be asymptotically exponential-type functions with the same scale parameter. Results showed that if the mean annual flood (MAF), used as the index flood, is known, the index-flood approach performed well for return periods of up to 10 years, resulting in a 25% mean relative error in prediction. For larger return periods the prediction capability decreased, but could be improved by the use of the GRADEX method. As the MAF is unknown in ungauged basins and in basins with short measurement records, we tested predicting the MAF using climate-physical catchment characteristics and discharge statistics, the latter when observations were available for only 8 years. Only the use of discharge statistics resulted in acceptable predictions.
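
    The GRADEX step above rests on the assumption that, beyond the chosen return period t0 (10 years here), discharge-volume quantiles grow with the same exponential (Gumbel-type) slope as rainfall. Under that assumption the extrapolation is a one-liner; the variable names below are ours, and the numbers in the usage note are purely illustrative.

```python
import math

def gradex_extrapolation(q_t0, rainfall_gradex, t, t0=10.0):
    """Extend a flood frequency curve beyond return period t0 (years):
    above t0, quantiles are assumed to grow with the rainfall
    distribution's exponential slope (the 'gradex'), i.e.
    Q(T) = Q(t0) + gradex * ln(T / t0)."""
    if t <= t0:
        raise ValueError("GRADEX extrapolation applies only for T > t0")
    return q_t0 + rainfall_gradex * math.log(t / t0)
```

    For example, a 10-year quantile of 50 volume units and a rainfall gradex of 10 units per log-unit of return period give a 100-year estimate of roughly 73 units; the index-flood approach supplies the curve up to t0, and GRADEX takes over beyond it.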

  6. Combining Different Privacy-Preserving Record Linkage Methods for Hospital Admission Data.

    Science.gov (United States)

    Stausberg, Jürgen; Waldenburger, Andreas; Borgs, Christian; Schnell, Rainer

    2017-01-01

    Record linkage (RL) is the process of identifying pairs of records that correspond to the same entity, for example the same patient. The basic approach assigns to each pair of records a similarity weight and then determines a certain threshold, above which the two records are considered a match. Three different RL methods were applied under privacy-preserving conditions to hospital admission data: deterministic RL (DRL), probabilistic RL (PRL), and Bloom filters. Patient characteristics such as names were one-way encrypted (DRL, PRL) or transformed into a cryptographic long-term key (Bloom filters). Based on one year of hospital admissions, the data set was split randomly into 30 thousand new and 1.5 million known patients. With the combination of the three RL methods, a positive predictive value of 83% (95% confidence interval 65%-94%) was attained. Thus, the presented combination of RL methods seems to be suited for other applications of population-based research.
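
    The Bloom-filter variant can be sketched compactly: character bigrams of an identifier are hashed into a bit set, and encodings are compared with the Dice coefficient, so similar names remain comparable without exposing the cleartext. The parameters and hashing scheme below are illustrative, not those of the study's cryptographic long-term key.

```python
import hashlib

def bloom_filter(name, n_bits=1000, n_hashes=10):
    """Encode a string's character bigrams as a set of bit positions
    (a Bloom filter), using double hashing of each bigram."""
    bits = set()
    for i in range(len(name) - 1):
        gram = name[i:i + 2]
        h1 = int(hashlib.sha256(gram.encode()).hexdigest(), 16)
        h2 = int(hashlib.sha256((gram + "|salt").encode()).hexdigest(), 16)
        for k in range(n_hashes):
            bits.add((h1 + k * h2) % n_bits)  # k-th hash of this bigram
    return bits

def dice_similarity(a, b):
    """Dice coefficient between two Bloom filters given as bit-position sets."""
    return 2.0 * len(a & b) / (len(a) + len(b))
```

    Identical names score 1.0; "smith" and "smyth" share the bigrams "sm" and "th" and score somewhere in between, which is what lets a threshold on the similarity weight decide match versus non-match.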

  7. Alternate modal combination methods in response spectrum analysis

    International Nuclear Information System (INIS)

    Wang, Y.K.; Bezler, P.

    1989-01-01

    In piping analyses using the response spectrum method, Square Root of the Sum of the Squares (SRSS) with clustering of closely spaced modes is the procedure most commonly used to combine the modal response components. This procedure is simple to apply and normally yields conservative estimates of the time history results. The purpose of this study is to investigate alternate methods of combining the modal response components. These methods are mathematically based to properly account for the combination of rigid and flexible modal responses as well as closely spaced modes. The methods are those advanced by Gupta, Hadjian and Lindley-Yow to address rigid response modes, and the Double Sum Combination (DSC) method and the Complete Quadratic Combination (CQC) method to account for closely spaced modes. A direct comparison between these methods as well as the SRSS procedure is made by using them to predict the response of six piping systems. For two piping systems, thirty-three earthquake records were considered to account for the impact of variations in the characteristics of the excitation. The results provided by each method are compared to the corresponding time history estimates of results as well as to each other. The degree of conservatism associated with each method is characterized. 7 refs., 4 figs., 2 tabs

  8. A Combined Fuzzy-AHP and Fuzzy-GRA Methodology for Hydrogen Energy Storage Method Selection in Turkey

    Directory of Open Access Journals (Sweden)

    Aytac Yildiz

    2013-06-01

    Full Text Available In this paper, we aim to select the most appropriate hydrogen energy storage (HES) method for Turkey from among the alternatives of tank, metal hydride and chemical storage, which were determined based on expert opinions and a literature review. Thus, we propose a combined multi-criteria decision making (MCDM) methodology consisting of a Buckley-extension-based fuzzy Analytical Hierarchy Process (Fuzzy-AHP) and a linear-normalization-based fuzzy Grey Relational Analysis (Fuzzy-GRA). This combined approach can be applied to a complex decision process, which often involves subjective data or vague information, and is used here to solve the HES selection problem with different defuzzification methods. The proposed approach is unique both in the HES literature and in the MCDM literature.

  9. Statistical methods of combining information: Applications to sensor data fusion

    Energy Technology Data Exchange (ETDEWEB)

    Burr, T.

    1996-12-31

    This paper reviews some statistical approaches to combining information from multiple sources. Promising new approaches will be described, and potential applications to combining not-so-different data sources such as sensor data will be discussed. Experiences with one real data set are described.

  10. Combined computational and experimental approach to improve the assessment of mitral regurgitation by echocardiography.

    Science.gov (United States)

    Sonntag, Simon J; Li, Wei; Becker, Michael; Kaestner, Wiebke; Büsen, Martin R; Marx, Nikolaus; Merhof, Dorit; Steinseifer, Ulrich

    2014-05-01

    Mitral regurgitation (MR) is one of the most frequent valvular heart diseases. To assess MR severity, color Doppler imaging (CDI) is the clinical standard. However, inadequate reliability, poor reproducibility and heavy user-dependence are known limitations. A novel approach combining computational and experimental methods is currently under development aiming to improve the quantification. A flow chamber for a circulatory flow loop was developed. Three different orifices were used to mimic variations of MR. The flow field was recorded simultaneously by a 2D Doppler ultrasound transducer and Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) simulations were conducted using the same geometry and boundary conditions. The resulting computed velocity field was used to simulate synthetic Doppler signals. Comparison between PIV and CFD shows a high level of agreement. The simulated CDI exhibits the same characteristics as the recorded color Doppler images. The feasibility of the proposed combination of experimental and computational methods for the investigation of MR is shown and the numerical methods are successfully validated against the experiments. Furthermore, it is discussed how the approach can be used in the long run as a platform to improve the assessment of MR quantification.

  11. Combining genomic and proteomic approaches for epigenetics research

    Science.gov (United States)

    Han, Yumiao; Garcia, Benjamin A

    2014-01-01

    Epigenetics is the study of changes in gene expression or cellular phenotype that do not change the DNA sequence. In this review, current methods, both genomic and proteomic, associated with epigenetics research are discussed. Among them, chromatin immunoprecipitation (ChIP) followed by sequencing and other ChIP-based techniques are powerful techniques for genome-wide profiling of DNA-binding proteins, histone post-translational modifications or nucleosome positions. However, mass spectrometry-based proteomics is increasingly being used in functional biological studies and has proved to be an indispensable tool to characterize histone modifications, as well as DNA–protein and protein–protein interactions. With the development of genomic and proteomic approaches, combination of ChIP and mass spectrometry has the potential to expand our knowledge of epigenetics research to a higher level. PMID:23895656

  12. Combination of Evidence with Different Weighting Factors: A Novel Probabilistic-Based Dissimilarity Measure Approach

    Directory of Open Access Journals (Sweden)

    Mengmeng Ma

    2015-01-01

    Full Text Available To solve the invalidation problem of the Dempster-Shafer theory of evidence (DS) under high conflict in multisensor data fusion, this paper presents a novel combination approach for conflicting evidence with different weighting factors using a new probabilistic dissimilarity measure. Firstly, an improved probabilistic transformation function is proposed to map basic belief assignments (BBAs) to probabilities. Then, a new dissimilarity measure integrating fuzzy nearness and an introduced correlation coefficient is proposed to characterize not only the difference between BBAs but also the divergence degree of the hypotheses that two BBAs support. Finally, the weighting factors used to reassign conflicts on BBAs are developed, and Dempster’s rule is chosen to combine the discounted sources. Simple numerical examples are employed to demonstrate the merit of the proposed method. Through analysis and comparison of the results, the new combination approach can effectively solve the problem of conflict management, with better convergence performance and robustness.
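
    Dempster's rule, which the authors apply to the discounted sources, has a simple direct implementation. The sketch below is the classical unweighted rule only; the paper's probabilistic dissimilarity measure and weighting factors are not reproduced.

```python
def dempster_combine(m1, m2):
    """Classical Dempster's rule of combination for two basic belief
    assignments (BBAs), given as dicts mapping frozenset hypotheses to masses."""
    combined, conflict = {}, 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:                       # consistent evidence: accumulate mass
                combined[inter] = combined.get(inter, 0.0) + mass_a * mass_b
            else:                           # contradictory evidence: conflict mass
                conflict += mass_a * mass_b
    k = 1.0 - conflict                      # renormalization constant
    return {h: v / k for h, v in combined.items()}
```

    The renormalization by 1 - conflict is exactly what misbehaves under high conflict, which motivates the discounting step proposed in the paper.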

  13. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)

  14. Efficient free energy calculations by combining two complementary tempering sampling methods.

    Science.gov (United States)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs or the requirement of high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height would remain in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address sampling in this so-called hidden barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems involving processes with hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least fivefold, even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work points to further applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.

  15. Approaches, tools and methods used for setting priorities in health research in the 21st century

    Science.gov (United States)

    Yoshida, Sachiyo

    2016-01-01

    Background Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. Methods To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001–2014. Results A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), the James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method. A further 19% used a combination of expert panel interview and focus group discussion (“consultation process”) but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. Conclusion The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure – such as the CHNRI method, the James Lind Alliance method and the Combined Approach Matrix – it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools.

  16. Forecasting telecommunication new service demand by analogy method and combined forecast

    Directory of Open Access Journals (Sweden)

    Lin Feng-Jenq

    2005-01-01

    Full Text Available In the forecast modeling field, we are usually faced with the more difficult problem of forecasting market demand for a new service or product. A new service or product is defined as one for which there is an absence of historical data in the market, so we can hardly use models to execute the forecasting work directly. In the Taiwan telecommunication industry, after liberalization in 1996, many new services have been opened continually. For optimal investment, it is necessary that the operators, who have been granted the concessions and licenses, forecast demand for a new service within their planning process. Though there are some methods to solve or avoid this predicament, in this paper we propose one forecasting procedure that integrates the concept of the analogy method and the idea of combined forecasting to generate a new service forecast. In view of the above, the first half of this paper describes the procedure of the analogy method and the approach of combined forecasting, and the second half provides the case of forecasting low-tier phone demand in Taiwan to illustrate the procedure's feasibility.
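
    The combined-forecast half of such a procedure can be sketched as a weighted average whose weights come from each model's historical accuracy on an analogous, established service. The inverse-MSE weighting shown here is one common choice, not necessarily the authors':

```python
def combined_forecast(forecasts, mses):
    """Combine point forecasts with weights inversely proportional to each
    model's historical mean squared error. For a new service with no history,
    the MSEs can be taken from an analogous service (the analogy method)."""
    inverse = [1.0 / m for m in mses]
    total = sum(inverse)
    weights = [v / total for v in inverse]
    combined = sum(w * f for w, f in zip(weights, forecasts))
    return combined, weights
```

    A model with one third the error variance of another receives three times the weight, so the combination leans toward the historically better model without discarding the other.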

  17. The economic burden of diabetes to French national health insurance: a new cost-of-illness method based on a combined medicalized and incremental approach.

    Science.gov (United States)

    de Lagasnerie, Grégoire; Aguadé, Anne-Sophie; Denis, Pierre; Fagot-Campagna, Anne; Gastaldi-Menager, Christelle

    2018-03-01

    A better understanding of the economic burden of diabetes constitutes a major public health challenge in order to design new ways to curb diabetes health care expenditure. The aim of this study was to develop a new cost-of-illness method in order to assess the specific and nonspecific costs of diabetes from a public payer perspective. Using medical and administrative data from the major French national health insurance system covering about 59 million individuals in 2012, we identified people with diabetes and then estimated the economic burden of diabetes. Various methods were used: (a) global cost of patients with diabetes, (b) cost of treatment directly related to diabetes (i.e., 'medicalized approach'), (c) incremental regression-based approach, (d) incremental matched-control approach, and (e) a novel combination of the 'medicalized approach' and the 'incremental matched-control' approach. We identified 3 million individuals with diabetes (5% of the population). The total expenditure of this population amounted to €19 billion, representing 15% of total expenditure reimbursed to the entire population. Of the total expenditure, €10 billion (52%) was considered to be attributable to diabetes care: €2.3 billion (23% of €10 billion) was directly attributable, and €7.7 billion was attributable to additional reimbursed expenditure indirectly related to diabetes (77%). Inpatient care represented the major part of the expenditure attributable to diabetes care (22%) together with drugs (20%) and medical auxiliaries (15%). Antidiabetic drugs represented an expenditure of about €1.1 billion, accounting for 49% of all diabetes-specific expenditure. This study shows the impact of the assumptions used to define costs on the evaluation of the economic burden of diabetes. The proposed new cost-of-illness method provides specific insight for policy-makers to enhance diabetes management and assess the opportunity costs of diabetes complications.

  18. Approaches, tools and methods used for setting priorities in health research in the 21st century.

    Science.gov (United States)

    Yoshida, Sachiyo

    2016-06-01

    Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), the James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, the James Lind Alliance method and the Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools.

  19. Application of Combination High-Throughput Phenotypic Screening and Target Identification Methods for the Discovery of Natural Product-Based Combination Drugs.

    Science.gov (United States)

    Isgut, Monica; Rao, Mukkavilli; Yang, Chunhua; Subrahmanyam, Vangala; Rida, Padmashree C G; Aneja, Ritu

    2018-03-01

    Modern drug discovery efforts have had mediocre success rates with increasing developmental costs, and this has encouraged pharmaceutical scientists to seek innovative approaches. Recently with the rise of the fields of systems biology and metabolomics, network pharmacology (NP) has begun to emerge as a new paradigm in drug discovery, with a focus on multiple targets and drug combinations for treating disease. Studies on the benefits of drug combinations lay the groundwork for a renewed focus on natural products in drug discovery. Natural products consist of a multitude of constituents that can act on a variety of targets in the body to induce pharmacodynamic responses that may together culminate in an additive or synergistic therapeutic effect. Although natural products cannot be patented, they can be used as starting points in the discovery of potent combination therapeutics. The optimal mix of bioactive ingredients in natural products can be determined via phenotypic screening. The targets and molecular mechanisms of action of these active ingredients can then be determined using chemical proteomics, and by implementing a reverse pharmacokinetics approach. This review article provides evidence supporting the potential benefits of natural product-based combination drugs, and summarizes drug discovery methods that can be applied to this class of drugs. © 2017 Wiley Periodicals, Inc.

  20. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    Science.gov (United States)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.

  1. A simple method for combining genetic mapping data from multiple crosses and experimental designs.

    Directory of Open Access Journals (Sweden)

    Jeremy L Peirce

    Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results for multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population, we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved by 51% for Hipp1a and by 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
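
    Fisher's combination test used in this approach has a closed-form tail probability for an even number of degrees of freedom, so the position-by-position combination can be sketched without statistical libraries. This is an illustrative implementation, not the authors' code:

```python
import math

def fisher_combine(pvalues):
    """Fisher's combination test: X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2n degrees of freedom under the joint null."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    n = len(pvalues)
    # The chi-square survival function for even df = 2n has a closed form:
    # P(X > x) = exp(-x/2) * sum_{k=0}^{n-1} (x/2)^k / k!
    half = stat / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k) for k in range(n))
```

    In the paper's pipeline this combination would be applied at each interpolated physical position across experiments, with the permutation step supplying the genome-wide null.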

  2. The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination

    Directory of Open Access Journals (Sweden)

    Liangping Wu

    2014-08-01

    Full Text Available Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign to each individual forecasting model weights that cannot change at each time point. In this study, we introduce into tourism forecasting the IOWGA operator combination method, which can overcome this defect of the three previous combination methods. Moreover, we further investigate the performance of the four combination methods through a theoretical evaluation and a forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method performs extremely well and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method achieves good forecast performance and performs almost the same as the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, and the variance-covariance combination method mainly reflects the reduction of forecast error. For future research, it may be worthwhile to introduce and examine other new combination methods that may improve forecasting accuracy, or to employ other techniques to control the timing of weight updates in combined forecasts.
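
    For reference, the classical variance-covariance (minimum-variance) combination weights can be computed directly from the forecast-error covariance matrix. This sketch illustrates that baseline method, not the IOWGA operator itself:

```python
import numpy as np

def varcov_weights(errors):
    """Variance-covariance combination weights w = S^{-1} 1 / (1' S^{-1} 1),
    where S is the covariance matrix of past forecast errors (one row of
    errors per model). The weights minimize the combined error variance."""
    S = np.cov(np.asarray(errors, dtype=float))
    ones = np.ones(S.shape[0])
    raw = np.linalg.solve(S, ones)
    return raw / (ones @ raw)
```

    As the abstract notes, these weights are fixed once estimated; the IOWGA operator's contribution is precisely to let the weighting adapt at each time point.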

  3. The use of Triangulation in Social Sciences Research : Can qualitative and quantitative methods be combined?

    Directory of Open Access Journals (Sweden)

    Ashatu Hussein

    2015-03-01

    Full Text Available This article refers to a study in Tanzania on fringe benefits or welfare via the work contract, in which we work both quantitatively and qualitatively. My focus is on the vital issue of combining methods or methodologies. There have been mixed views on the use of triangulation in research. Some authors argue that triangulation is just for broadening and deepening the understanding of the phenomenon under study, while others have argued that triangulation is actually used to increase study accuracy; in this case triangulation is one of the validity measures. Triangulation is defined as the use of multiple methods, mainly qualitative and quantitative, in studying the same phenomenon for the purpose of increasing study credibility. This implies that triangulation is the combination of two or more methodological approaches, theoretical perspectives, data sources, investigators and analysis methods to study the same phenomenon. However, using both qualitative and quantitative paradigms in the same study has resulted in debate, with some researchers arguing that the two paradigms differ epistemologically and ontologically. Nevertheless, both paradigms are designed towards understanding a particular subject area of interest, and both of them have strengths and weaknesses. Thus, when they are combined there is a great possibility of neutralizing the flaws of one method and strengthening the benefits of the other for better research results. Thus, to reap the benefits of the two paradigms and minimize the drawbacks of each, the combination of the two approaches has been advocated in this article. The quality of our studies on welfare to combat poverty is crucial, especially when we want our conclusions to matter in practice.

  4. Surgical treatment of traumatic cervical facet dislocation: anterior, posterior or combined approaches?

    Directory of Open Access Journals (Sweden)

    Catarina C. Lins

    Full Text Available ABSTRACT Surgical treatment is well accepted for patients with traumatic cervical facet joint dislocations (CFD), but there is uncertainty over which approach is better: anterior, posterior or combined. We performed a systematic literature review to evaluate the indications for anterior and posterior approaches in the management of CFD. Anterior approaches can restore cervical lordosis and cause less postoperative pain and fewer wound problems. Posterior approaches are useful for direct reduction of locked facet joints and provide stronger fixation from a biomechanical point of view. Combined approaches can be used in more complex cases. Although both anterior and posterior approaches can be used interchangeably, some patients may benefit from one of them over the other, as discussed in this review. Surgeons who treat cervical spine trauma should be able to perform both procedures, as well as combined approaches, to adequately manage CFD and improve patients’ final outcomes.

  5. The structure and assembly of surface layer proteins : a combined approach of in silico and experimental methods

    International Nuclear Information System (INIS)

    Horejs, C.

    2011-01-01

    Self-assembly of matter is one of nature's most sophisticated strategies to organize molecules on a large scale and to create order from disorder. Surface (S-)layer proteins self-assemble in a highly reproducible and robust fashion in order to form crystalline layers that completely cover and protect prokaryotic cells. Long conserved during evolution, S-layers constitute a unique model system to study the molecular mechanisms of functional self-assembly, while additionally, they provide a basic matrix for the specific construction of ordered nanostructures. Due to their intrinsic capabilities to self-assemble into two-dimensional crystals, the elucidation of the three-dimensional structure of single S-layer proteins demands an approach beyond conventional structure determination methods. In this work, computer simulations were combined with experimental techniques in order to study the structure and intra- and intermolecular potentials guiding the proteins to self-assemble into lattices with different symmetries. Molecular dynamics, Monte Carlo methods, small-angle X-ray scattering involving a new theoretical description, and AFM-based single-molecule force spectroscopy yield new insights into the three-dimensional structure of S-layer proteins, the location, type and distribution of amino acids in S-layer lattices, the molecular mechanisms behind the self-assembly process, the mechanical stability and adaptive structural conformations that S-layer proteins are able to establish. In silico studies - embedded in an adequate experimental and theoretical scaffold - offer the possibility to calculate structural and thermodynamic features of proteins, while this work demonstrates the growing impact of such theoretical techniques in the fascinating field of biophysics at the nano-scale. (author)

  6. Methods and statistics for combining motif match scores.

    Science.gov (United States)

    Bailey, T L; Gribskov, M

    1998-01-01

    Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
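
    Method 3, the product of score p-values, has a closed-form significance under the null hypothesis that the n p-values are independent and uniform on (0, 1). The sketch below is illustrative, not the MAST implementation:

```python
import math

def product_pvalue(pvalues):
    """Significance of the product of n independent uniform(0, 1) p-values:
    P(prod <= x) = x * sum_{k=0}^{n-1} (-ln x)^k / k!."""
    x = math.prod(pvalues)
    n = len(pvalues)
    log_inv = -math.log(x)
    return x * sum(log_inv ** k / math.factorial(k) for k in range(n))
```

    Note that the raw product x itself would overstate significance; the correction factor accounts for the many ways n p-values can yield the same product.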

  7. Mixing Methods in Organizational Ethics and Organizational Innovativeness Research : Three Approaches to Mixed Methods Analysis

    OpenAIRE

    Riivari, Elina

    2015-01-01

    This chapter discusses three categories of mixed methods analysis techniques: variable-oriented, case-oriented, and process/experience-oriented. All three categories combine qualitative and quantitative approaches to research methodology. The major differences among the categories are the focus of the study, the available analysis techniques and the temporal aspect of the study. In variable-oriented analysis, the study focus is the relationships between the research phenomena. In case-oriented...

  8. On Combining Language Models: Oracle Approach

    National Research Council Canada - National Science Library

    Hacioglu, Kadri; Ward, Wayne

    2001-01-01

    In this paper, we address the problem of combining several language models (LMs). We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle...
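
    Linear interpolation, one of the baselines mentioned above, is a one-line combination of conditional probabilities. A minimal sketch (the model callables and weights here are hypothetical):

```python
def interpolate_lms(models, lambdas, history, word):
    """Linear interpolation of language models:
    P(w | h) = sum_i lambda_i * P_i(w | h), with the lambdas summing to 1."""
    assert abs(sum(lambdas) - 1.0) < 1e-9, "interpolation weights must sum to 1"
    return sum(lam * model(history, word) for lam, model in zip(lambdas, models))
```

    In practice the lambdas are tuned on held-out data (e.g. by EM); the oracle in the paper's title refers to an upper bound on what any such fixed combination could achieve.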

  9. Combined Interhemispheric and Transsylvian Approach for Resection of Craniopharyngioma.

    Science.gov (United States)

    Inoue, Tomohiro; Ono, Hideaki; Tamura, Akira; Saito, Isamu

    2018-04-01

    We present the case of a 37-year-old man with a huge cystic suprasellar craniopharyngioma, who presented with significant memory disturbance due to obstructive hydrocephalus. A combined interhemispheric and pterional approach was chosen to resect the huge suprasellar tumor. The interhemispheric trans-lamina terminalis approach was quite effective for resecting the third ventricular tumor, while the pterional approach was useful for dissecting the tumor off the basilar perforators and the stalk. The link to the video can be found at: https://youtu.be/BoYIPa96kdo .

  10. A penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography.

    Science.gov (United States)

    Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn

    2007-01-01

    The conjugate gradient method is verified to be efficient for nonlinear optimization problems with large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of both kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has a better performance than conventional conjugate-gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT.
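
    A minimal sketch of the idea (a quadratic penalty for nonnegativity inside a conjugate gradient iteration with a restart safeguard) is given below on a toy least-squares system. It is a plain Fletcher-Reeves nonlinear CG, not the authors' combined linear/nonlinear algorithm, and `mu`, the tolerances and the toy problem are assumptions:

```python
import numpy as np

def penalized_cg(A, b, mu=10.0, iters=500, tol=1e-10):
    """Fletcher-Reeves nonlinear conjugate gradient for the penalized problem
        f(x) = ||A x - b||^2 + mu * ||min(x, 0)||^2,
    where the quadratic penalty discourages negative (non-physical) values."""
    def f(x):
        neg = np.minimum(x, 0.0)
        return np.sum((A @ x - b) ** 2) + mu * np.sum(neg ** 2)

    def grad(x):
        neg = np.minimum(x, 0.0)
        return 2.0 * A.T @ (A @ x - b) + 2.0 * mu * neg

    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for _ in range(iters):
        if g @ g < tol:                 # gradient small enough: converged
            break
        if g @ d >= 0.0:                # safeguard: restart with steepest descent
            d = -g
        # backtracking (Armijo) line search on the penalized objective
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-14:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x
```

    On a consistent system with a nonnegative true solution the penalty term is inactive at the optimum, so the iteration recovers the least-squares solution while steering intermediate iterates away from negative values.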

  11. A Benders decomposition approach for a combined heat and power economic dispatch

    International Nuclear Information System (INIS)

    Abdolmohammadi, Hamid Reza; Kazemi, Ahad

    2013-01-01

    Highlights: • Benders decomposition algorithm to solve combined heat and power economic dispatch. • Decomposing the CHPED problem into a master problem and a subproblem. • Considering the non-convex heat-power feasible region efficiently. • Solving 4-unit and 5-unit systems with 2 and 3 cogeneration units, respectively. • Obtaining results as good as or better in terms of objective values. - Abstract: Recently, cogeneration units have played an increasingly important role in the utility industry. The optimal utilization of multiple combined heat and power (CHP) systems is therefore an important optimization task in power system operation. Unlike power economic dispatch, which has a single equality constraint, two equality constraints must be met in the combined heat and power economic dispatch (CHPED) problem. Moreover, in cogeneration units the power capacity limits are functions of the unit heat productions, and the heat capacity limits are functions of the unit power generations. Thus, CHPED is a complicated optimization problem. In this paper, an algorithm based on Benders decomposition (BD) is proposed to solve the economic dispatch (ED) problem for cogeneration systems. In the proposed method, the combined heat and power economic dispatch problem is decomposed into a master problem and a subproblem. The subproblem generates Benders cuts, which the master problem adds as new inequality constraints to the previous ones. The iterative process continues until the upper and lower bounds on the optimal objective value are close enough and a converged optimal solution is found. The Benders decomposition based approach provides a good framework for considering the non-convex feasible operating regions of cogeneration units efficiently. In this paper, a four-unit system with two cogeneration units and a five-unit system with three cogeneration units are analyzed to exhibit the effectiveness of the proposed approach. In all cases, the
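The master/subproblem iteration with converging upper and lower bounds can be illustrated on a toy one-dimensional problem. The cuts below are Kelley-style cutting planes, which play the role of Benders cuts for a convex subproblem value function; the function `phi` and all numbers are invented for illustration, and the "master" is solved by a crude grid search rather than a real solver:

```python
import numpy as np

# Toy convex "subproblem value function" of the master variable y (invented).
phi  = lambda y: (y - 3.0) ** 2 + 1.0    # subproblem optimal cost at fixed y
dphi = lambda y: 2.0 * (y - 3.0)         # subgradient -> slope of a new cut

lo, hi = 0.0, 10.0
grid = np.linspace(lo, hi, 2001)         # crude stand-in for the master solver
cuts = []                                # cut i enforces: theta >= a_i + b_i*y
y, ub, lb = lo, np.inf, -np.inf
for _ in range(50):
    val = phi(y)                         # "solve the subproblem" at current y
    ub = min(ub, val)                    # feasible value -> upper bound
    cuts.append((val - dphi(y) * y, dphi(y)))   # cut through (y, phi(y))
    model = np.max([a + b * grid for a, b in cuts], axis=0)
    i = int(np.argmin(model))            # "master": minimize the cut model
    y, lb = float(grid[i]), float(model[i])
    if ub - lb < 1e-6:                   # bounds have converged
        break
```

In a real CHPED instance the master problem holds the coupling variables and the subproblem prices out the unit dispatch, but the bound-convergence logic is the same.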

  12. A combined energetic and economic approach for the sustainable design of geothermal plants

    International Nuclear Information System (INIS)

    Franco, Alessandro; Vaccaro, Maurizio

    2014-01-01

    Highlights: • Exploitation of medium to low temperature geothermal sources: ORC power plants. • Integrated energetic and economic approach for the analysis of geothermal power plants. • A brief overview of the cost items of geothermal power plants. • Analysis of the specific cost of geothermal power plants based on the proposed method. • Analysis of the sustainability of geothermal energy systems based on resource durability. - Abstract: The perspectives for future development of geothermal power plants, mainly of small size for the exploitation of medium to low temperature reservoirs, are discussed and analyzed in the present paper. Although there is general interest in new power plants and investment in this sector, few new installations are being built: the apparent advantage of a zero-cost energy source is offset by high drilling and installation costs. A key element in the design of a geothermal plant for a medium temperature geothermal source is the definition of the power of the plant (its size): this is important in order to define not only the economic plan but also the durability of the reservoir. Considering that the development of the geothermal industry cannot be driven by an economic perspective alone, the authors propose a method for joining the energetic and economic approaches. The result of the combined energetic and economic analysis is particularly interesting in the case of Organic Rankine Cycle (ORC) power plants, in order to define a suitable and optimal size and to maximize resource durability. The method is illustrated with reference to some particular case studies, showing that the sustainability of small size geothermal plants can be achieved only if the search for more economical solutions is combined with efforts to increase efficiency.

  13. Combined transoral and endoscopic approach for total maxillectomy: a pioneering report.

    Science.gov (United States)

    Liu, Zhuofu; Yu, Huapeng; Wang, Dehui; Wang, Jingjing; Sun, Xicai; Liu, Juan

    2013-06-01

    Total maxillectomy is sometimes necessary, especially for malignant tumors originating from the maxillary sinus. Here we describe a combined transoral and endoscopic approach for total maxillectomy in the treatment of malignant maxillary sinus tumors and evaluate its short-term outcome in terms of physiological function, aesthetic outcome, and complications. Six patients underwent this approach for resection of malignant maxillary sinus tumors from May 2010 to June 2011. The combined transoral and endoscopic approach comprises five basic steps: total sphenoethmoidectomy, sublabial incision, incision of the frontal process of the maxilla, incision of the zygomaticomaxillary fissure, and hard palate osteotomy. All patients successfully underwent the planned total endoscopic maxillectomy without the need for a facial incision or transfixion of the nasal septum, and there were no significant complications. Five patients received preoperative radiation therapy. All patients were well and had no recurrence at follow-up of 13 to 27 months. The combined approach is feasible and can be performed in carefully selected patients. The absence of facial incisions or transfixion of the nasal septum, potential improvement in hemostasis, and visual magnification may help to decrease the morbidity of traditional open approaches.

  14. Combining Pathway Identification and Breast Cancer Survival Prediction via Screening-Network Methods

    Directory of Open Access Journals (Sweden)

    Antonella Iuliano

    2018-06-01

    Full Text Available Breast cancer is one of the most common invasive tumors, causing high mortality among women. It is characterized by high heterogeneity in its biological and clinical characteristics. Several high-throughput assays have been used to collect genome-wide information for many patients in large collaborative studies. This knowledge has improved our understanding of its biology and led to new methods of diagnosing and treating the disease. In particular, systems biology has become a valid approach for obtaining better insights into breast cancer biological mechanisms. A crucial component of current research lies in identifying novel biomarkers that can be predictive of breast cancer patient prognosis on the basis of the molecular signature of the tumor sample. However, the high dimension and low sample size of the data greatly increase the difficulty of cancer survival analysis, demanding the development of ad hoc statistical methods. In this work, we propose novel screening-network methods that predict patient survival outcome by screening key survival-related genes, and we assess the capability of the proposed approaches using the METABRIC dataset. In particular, we first identify a subset of genes by using variable screening techniques on gene expression data. Then, we perform Cox regression analysis by incorporating network information associated with the selected subset of genes. The novelty of this work consists in the improved prediction of survival responses due to the different types of screenings (i.e., biomedical-driven, data-driven, and a combination of the two) before building the network-penalized model. Indeed, the combination of the two screening approaches allows us to use the available biological knowledge on breast cancer and complement it with additional information emerging from the data used for the analysis. Moreover, we also illustrate how to extend the proposed approaches to integrate an additional omic layer, such as copy number

  15. Teamwork: improved eQTL mapping using combinations of machine learning methods.

    Directory of Open Access Journals (Sweden)

    Marit Ackermann

    Full Text Available Expression quantitative trait loci (eQTL) mapping is a widely used technique to uncover regulatory relationships between genes. A range of methodologies have been developed to map links between expression traits and genotypes. The DREAM (Dialogue on Reverse Engineering Assessments and Methods) initiative is a community project to objectively assess the relative performance of different computational approaches for solving specific systems biology problems. The goal of one of the DREAM5 challenges was to reverse-engineer genetic interaction networks from synthetic genetic variation and gene expression data, which simulates the problem of eQTL mapping. In this framework, we proposed an approach whose originality resides in the use of a combination of existing machine learning algorithms (a committee). Although it was not the best performer, this method was by far the most precise on average. After the competition, we continued in this direction by evaluating other committees using the DREAM5 data and developed a method that relies on Random Forests and LASSO. It achieved a much higher average precision than the DREAM best performer, at the cost of slightly lower average sensitivity.
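The precision/sensitivity trade-off of a committee can be seen in a deliberately simple consensus example (all sets below are invented; the paper's committee combines Random Forests and LASSO scores rather than taking a plain intersection):

```python
# Ground truth and two toy "methods"; integers stand for candidate eQTL links.
truth    = {0, 1, 2, 3, 4, 5, 6, 7}
method_a = {0, 1, 2, 3, 4, 5, 10, 11}     # 6 of 8 predictions correct
method_b = {2, 3, 4, 5, 6, 7, 12, 13}     # 6 of 8 predictions correct
committee = method_a & method_b           # keep only links both methods agree on

def precision(pred):
    return len(pred & truth) / len(pred)

def sensitivity(pred):
    return len(pred & truth) / len(truth)
```

Here the consensus set {2, 3, 4, 5} is entirely correct (precision 1.0) but recovers only half of the true links (sensitivity 0.5), mirroring the high-precision, lower-sensitivity behavior reported above.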

  16. [A cloud detection algorithm for MODIS images combining Kmeans clustering and multi-spectral threshold method].

    Science.gov (United States)

    Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei

    2011-04-01

    An improved cloud detection method combining Kmeans clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are first categorized into two major classes by the Kmeans method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. A multi-spectral threshold detection is then applied to the first class to eliminate interference such as smoke and snow. The method was tested with MODIS data acquired at different times over different underlying surfaces. Visual inspection showed that the algorithm can effectively detect small areas of cloud pixels and exclude interference from the underlying surface, which provides a good foundation for a subsequent fire detection approach.
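A minimal sketch of the two-stage idea, with invented reflectance and thermal values and a hypothetical 265 K threshold (the paper's actual MODIS bands and thresholds are not reproduced here):

```python
import numpy as np

def kmeans_1d(x, iters=20):
    """Minimal 2-cluster K-means on one feature; centers seeded at the extremes."""
    centers = np.array([x.min(), x.max()])
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(2)])
    return labels, centers

# Toy pixels: visible reflectance r1 and a thermal band bt (all values invented).
# Pixels 0-2 are cloud, 3-4 snow, 5-7 vegetation/water/land.
r1 = np.array([0.80, 0.90, 0.85, 0.75, 0.80, 0.10, 0.15, 0.20])
bt = np.array([250., 255., 252., 270., 268., 280., 285., 290.])

labels, centers = kmeans_1d(r1)            # stage 1: bright vs. dark classes
bright = labels == np.argmax(centers)      # class containing cloud/smoke/snow
cloud = bright & (bt < 265.0)              # stage 2: spectral threshold test
```

Stage 1 isolates the bright class; stage 2 rejects bright but warm pixels (snow here), leaving only the cloud pixels.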

  17. TIPS Placement via Combined Transjugular and Transhepatic Approach for Cavernous Portal Vein Occlusion: Targeted Approach

    Directory of Open Access Journals (Sweden)

    Natanel Jourabchi

    2013-01-01

    Full Text Available Purpose. We report a novel technique which aided recanalization of an occluded portal vein for transjugular intrahepatic portosystemic shunt (TIPS) creation in a patient with symptomatic portal vein thrombosis with cavernous transformation. Some have previously considered cavernous transformation a contraindication to TIPS. Case Presentation. A 62-year-old man with chronic pancreatitis, portal vein thrombosis, portal hypertension, and recurrent variceal bleeding presented with melena and hematemesis. The patient was severely anemic, hemodynamically unstable, and required emergent portal decompression. Attempts to recanalize the main portal vein using traditional transjugular access were unsuccessful. After percutaneous transhepatic right portal vein access and navigation of a wire through the occluded main portal vein, an angioplasty balloon was inflated at the desired site of shunt takeoff. The balloon was targeted and punctured from the transjugular approach, and a wire was passed into the portal system. TIPS placement then proceeded routinely. Conclusion. Although occlusion of the portal vein increases the difficulty of performing TIPS, it should not be considered an absolute contraindication. We have described a method for recanalizing an occluded portal vein using a combined transhepatic and transjugular approach for TIPS. This approach may be useful to relieve portal hypertension in patients who fail endoscopic and/or surgical therapies.

  18. The EDIE method – towards an approach to collaboration-based persuasive design

    DEFF Research Database (Denmark)

    Hansen, Sandra Burri Gram

    2016-01-01

    This paper presents the initial steps towards a collaboration-based method for persuasive design – the EDIE method (Explore, Design, Implement, Evaluate). The method is inspired by Design-Based Research, but developed to combine different design approaches that have dominated the persuasive technology field over the past decade. The rhetorical notion of Kairos is considered a key element in the EDIE method, resulting in a distinct focus on participatory design and constructive ethics. The method is explained through a practical example of developing persuasive learning designs in collaboration...

  19. Combining multiple FDG-PET radiotherapy target segmentation methods to reduce the effect of variable performance of individual segmentation methods

    Energy Technology Data Exchange (ETDEWEB)

    McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Lee, John A [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)

    2013-04-15

    Purpose: Many approaches have been proposed to segment high-uptake objects in 18F-fluoro-deoxy-glucose positron emission tomography images, but none provides consistent performance across the large variety of imaging situations. This study investigates the use of two methods of combining individual segmentation methods to reduce the impact of their inconsistent performance: simple majority voting and probabilistic estimation. Methods: The National Electrical Manufacturers Association image quality phantom containing five glass spheres with diameters of 13-37 mm and two irregularly shaped volumes (16 and 32 cc), formed by deforming high-density polyethylene bottles in a hot water bath, were filled with 18F-fluoro-deoxy-glucose and iodine contrast agent. Repeated 5-min positron emission tomography (PET) images were acquired at 4:1 and 8:1 object-to-background contrasts for the spherical objects and 4.5:1 and 9:1 for the irregular objects. Five individual methods were used to segment each object: 40% thresholding, adaptive thresholding, k-means clustering, seeded region-growing, and a gradient-based method. Volumes were combined using a majority vote (MJV) or Simultaneous Truth And Performance Level Estimate (STAPLE) method. Accuracy of the segmentations relative to CT ground truth volumes was assessed using the Dice similarity coefficient (DSC) and the symmetric mean absolute surface distance (SMASD). Results: MJV had median DSC values of 0.886 and 0.875, and SMASD of 0.52 and 0.71 mm, for spheres and irregular shapes, respectively. STAPLE provided similar results, with median DSC of 0.886 and 0.871 and median SMASD of 0.50 and 0.72 mm for spheres and irregular shapes, respectively. STAPLE had significantly higher DSC and lower SMASD values than MJV for spheres (DSC, p < 0.0001; SMASD, p = 0.0101), but MJV had significantly higher DSC and lower SMASD values compared to STAPLE for irregular shapes (DSC, p < 0.0001; SMASD, p = 0.0027). DSC was not significantly
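The MJV combination is simple to state: a voxel is kept if more than half of the individual segmentations include it. A toy sketch with invented binary masks follows (STAPLE, by contrast, estimates per-method sensitivity and specificity with an EM algorithm and is not shown):

```python
import numpy as np

# Five toy binary segmentations of the same six-voxel profile (invented masks,
# standing in for the five individual segmentation methods).
segs = np.array([
    [0, 1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
])
majority = segs.sum(axis=0) > segs.shape[0] // 2   # voxel kept if >2 of 5 agree

truth = np.array([0, 1, 1, 1, 0, 0])               # toy "CT ground truth"
dice = 2 * (majority & (truth == 1)).sum() / (majority.sum() + truth.sum())
```

The single outlier votes (e.g. the stray voxel in the third mask) are voted out, which is exactly how the combination damps the inconsistent performance of individual methods.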

  20. 4th Workshop on Combinations of Intelligent Methods and Applications

    CERN Document Server

    Palade, Vasile; Prentzas, Jim

    2016-01-01

    This volume includes extended and revised versions of the papers presented at the 4th Workshop on “Combinations of Intelligent Methods and Applications” (CIMA 2014), which was intended to be a forum for exchanging experience and ideas among researchers and practitioners dealing with combinations of different intelligent methods in Artificial Intelligence. The aim is to create integrated or hybrid methods that benefit from each of their components. Some of the presented efforts combine soft computing methods (fuzzy logic, neural networks and genetic algorithms). Another stream of efforts integrates case-based reasoning or machine learning with soft-computing methods. Some of the combinations have been more widely explored, like neuro-symbolic methods, neuro-fuzzy methods and methods combining rule-based and case-based reasoning. CIMA 2014 was held in conjunction with the 26th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2014).

  1. Combining Unsupervised and Supervised Statistical Learning Methods for Currency Exchange Rate Forecasting

    OpenAIRE

    Vasiljeva, Polina

    2016-01-01

    In this thesis we revisit the challenging problem of forecasting currency exchange rate. We combine machine learning methods such as agglomerative hierarchical clustering and random forest to construct a two-step approach for predicting movements in currency exchange prices of the Swedish krona and the US dollar. We use a data set with over 200 predictors comprised of different financial and macro-economic time series and their transformations. We perform forecasting for one week ahead with d...

  2. Approaches to modernize the combination drug development paradigm

    Directory of Open Access Journals (Sweden)

    Daphne Day

    2016-10-01

    Full Text Available Abstract Recent advances in genomic sequencing and omics-based capabilities are uncovering tremendous therapeutic opportunities and rapidly transforming the field of cancer medicine. Molecularly targeted agents aim to exploit key tumor-specific vulnerabilities such as oncogenic or non-oncogenic addiction and synthetic lethality. Additionally, immunotherapies targeting the host immune system are proving to be another promising and complementary approach. Owing to substantial tumor genomic and immunologic complexities, combination strategies are likely to be required to adequately disrupt intricate molecular interactions and provide meaningful long-term benefit to patients. To optimize the therapeutic success and application of combination therapies, systematic scientific discovery will need to be coupled with novel and efficient clinical trial approaches. Indeed, a paradigm shift is required to drive precision medicine forward, from the traditional “drug-centric” model of clinical development in pursuit of small incremental benefits in large heterogeneous groups of patients, to a “strategy-centric” model to provide customized transformative treatments in molecularly stratified subsets of patients or even in individual patients. Crucially, to combat the numerous challenges facing combination drug development—including our growing but incomplete understanding of tumor biology, technical and informatics limitations, and escalating financial costs—aligned goals and multidisciplinary collaboration are imperative to collectively harness knowledge and fuel continual innovation.

  3. Rectal duplication cyst: a combined abdominal and endoanal operative approach.

    Science.gov (United States)

    Rees, Clare M; Woodward, Mark; Grier, David; Cusick, Eleri

    2007-04-01

    Rectal duplication cysts are rare. Early excision is the treatment of choice, and a number of surgical approaches have been described. We present a 3-week-old infant with a 3 cm cyst that was excised using a previously unreported combined abdominal and endoanal approach.

  4. A new approach combining analytical methods for workplace exposure assessment of inhalable multi-walled carbon nanotubes

    NARCIS (Netherlands)

    Tromp, P.C.; Kuijpers, E.; Bekker, C.; Godderis, L.; Lan, Q.; Jedynska, A.D.; Vermeulen, R.; Pronk, A.

    2017-01-01

    To date there is no consensus about the most appropriate analytical method for measuring carbon nanotubes (CNTs), hampering the assessment and limiting the comparison of data. The goal of this study is to develop an approach for the assessment of the level and nature of inhalable multi-wall CNTs

  5. Nanotechnology-based combinational drug delivery: an emerging approach for cancer therapy.

    Science.gov (United States)

    Parhi, Priyambada; Mohanty, Chandana; Sahoo, Sanjeeb Kumar

    2012-09-01

    Combination therapy for the treatment of cancer is becoming more popular because it generates synergistic anticancer effects, reduces individual drug-related toxicity, and suppresses multi-drug resistance through different mechanisms of action. In recent years, nanotechnology-based combination drug delivery to tumor tissues has emerged as an effective strategy, overcoming many of the biological, biophysical and biomedical barriers that the body mounts against the successful delivery of anticancer drugs. The sustained, controlled and targeted delivery of chemotherapeutic drugs in a combination approach enhances the therapeutic anticancer effect with reduced drug-associated side effects. In this article, we review the scope of various nanotechnology-based combination drug delivery approaches and summarize the current perspectives and challenges facing the successful treatment of cancer. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Combined methods of tolerance increasing for embedded SRAM

    Science.gov (United States)

    Shchigorev, L. A.; Shagurin, I. I.

    2016-10-01

    The possibilities of combining different methods of increasing fault tolerance for SRAM, such as error detection and correction codes, parity bits, and redundant elements, are considered. The area penalties of using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block in a 28 nm manufacturing process, and the effectiveness of the proposed combinations is evaluated. The results of these investigations can be useful for designing fault-tolerant “systems on chip”.

  7. On Combining Elements of Different Ways of Learning, Methods and Knowledge

    Directory of Open Access Journals (Sweden)

    Dušana Findeisen

    2013-12-01

    Full Text Available The paper deals with different thinkers' attitudes towards methods in adult education. It examines the value of some elements of »trial and error learning« and »non-directive learning«. Like a multifaceted approach based on elements drawn from different methods, the way we learn can also be eclectic. To illustrate this assertion, the author analyses the »anti method« used by Maurice Pialat, a French film director, contrasting it with methods in which the aim is set in advance and the process leading towards it is organised in sequences. This is most often the case in script-based shooting of films, directing a theatre performance or running adult education. Moreover, the author argues that learning about how to do something is combined with learning about how to be. She further emphasises that methods should not be used to impose one's knowledge and one's reality on the learner, thus destroying the circumstances necessary for gaining or creating knowledge.

  8. A combined usage of stochastic and quantitative risk assessment methods in the worksites: Application on an electric power provider

    International Nuclear Information System (INIS)

    Marhavilas, P.K.; Koulouriotis, D.E.

    2012-01-01

    An individual method can build neither a realistic forecasting model nor a risk assessment process for worksites, so future work should focus on combined forecasting/estimation approaches. The main purpose of this paper is to gain insight into a methodological framework for risk prediction and estimation that combines three different methods: the proportional quantitative-risk-assessment technique (PRAT), the time-series stochastic process (TSP), and the method of estimating societal risk (SRE) by F–N curves. To demonstrate the usefulness of the combined usage of stochastic and quantitative risk assessment methods, an application to an electric power provider is presented, using empirical data.

  9. Combining Approach in Stages with Least Squares for fits of data in hyperelasticity

    Science.gov (United States)

    Beda, Tibi

    2006-10-01

    The present work concerns a method of block-wise continuous approximation of a continuous function: a method of approximation combining the Approach in Stages with finite-domain Least Squares. An identification procedure by sub-domains determines the basic generating functions step by step, permitting their weighting effects to be felt. This procedure allows one to control the signs, and to some extent the optimal values, of the estimated parameters, and consequently it provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed for rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).

  10. Combining Semantic and Lexical Methods for Mapping MedDRA to VCM Icons.

    Science.gov (United States)

    Lamy, Jean-Baptiste; Tsopra, Rosy

    2018-01-01

    VCM (Visualization of Concept in Medicine) is an iconic language that represents medical concepts, such as disorders, by icons. VCM has a formal semantics described by an ontology. The icons can be used in medical software to provide a visual summary or to enrich texts. However, the use of VCM icons in user interfaces requires mapping standard medical terminologies to VCM. Here, we present a method combining semantic and lexical approaches for mapping MedDRA to VCM. The method takes advantage of the hierarchical relations in MedDRA. It also analyzes the groups of lemmas in the terms' labels, and relies on a manual mapping of these groups to the concepts in the VCM ontology. We evaluate the method on 50 terms. Finally, we discuss the method and suggest perspectives.

  11. Management of interstitial ectopic pregnancies with a combined intra-amniotic and systemic approach.

    Science.gov (United States)

    Swank, Morgan L; Harken, Tabetha R; Porto, Manuel

    2013-08-01

    Approximately 2% of all pregnancies are ectopic; of these, 4% are interstitial or cervical. There exists no clear consensus as to whether surgical or medical management is superior. We present three cases of advanced nonfallopian tube ectopic pregnancies from 6 to 8 weeks of gestation. Our first two cases were managed with a combined intrafetal, intra-amniotic and systemic approach using methotrexate and potassium chloride, whereas our third case was managed with an intra-amniotic approach alone. Our combined approach cases were successful, with resolution of human chorionic gonadotropin in 50 and 34 days, whereas our single approach case re-presented with bleeding requiring uterine artery embolization and operative removal of products of conception. Patients presenting with advanced interstitial or cervical pregnancies who are clinically stable can be offered medical management with a combined approach.

  12. Combined methods for elliptic equations with singularities, interfaces and infinities

    CERN Document Server

    Li, Zi Cai

    1998-01-01

    In this book the author sets out to answer two important questions: 1. Which numerical methods may be combined together? 2. How can different numerical methods be matched together? In doing so the author presents a number of useful combinations, for instance, the combination of various FEMs, the combinations of FEM-FDM, REM-FEM, RGM-FDM, etc. The combined methods have many advantages over single methods: high accuracy of solutions, less CPU time, less computer storage, easy coupling with singularities as well as the complicated boundary conditions. Since coupling techniques are essential to combinations, various matching strategies among different methods are carefully discussed. The author provides the matching rules so that optimal convergence, even superconvergence, and optimal stability can be achieved, and also warns of the matching pitfalls to avoid. Audience: The book is intended for both mathematicians and engineers and may be used as text for advanced students.

  13. Combined endoscopic approach in the management of suprasellar craniopharyngioma.

    Science.gov (United States)

    Deopujari, Chandrashekhar E; Karmarkar, Vikram S; Shah, Nishit; Vashu, Ravindran; Patil, Rahul; Mohanty, Chandan; Shaikh, Salman

    2018-05-01

    Craniopharyngiomas are dysontogenic tumors with benign histology but aggressive behavior. The surgical challenges posed by the tumor are well recognized. Neuroendoscopy has recently contributed to its surgical management. This study focuses on our experience in managing craniopharyngiomas in recent years, highlighting the role of a combined endoscopic transventricular and endonasal approach. Ninety-two patients were treated for craniopharyngioma from 2000 to 2016 by the senior author. A total of 125 procedures, microsurgical (58) and endoscopic (67), were undertaken. The combined endoscopic approach was carried out in 18 of these patients, 16 children and 2 young adults. All of these patients presented with a large cystic suprasellar mass associated with hydrocephalus. They were first treated with a transventricular endoscopic procedure to decompress the cystic component, followed by an endonasal transsphenoidal procedure for excision within the next 2 to 6 days. All of these patients improved after the initial cyst decompression, with relief of hydrocephalus, while awaiting removal of the remaining tumor in a more elective setting. Gross total resection could be achieved in 84% of these patients. Diabetes insipidus was the most common postsurgical complication, seen in 61% of patients in the immediate period but persistent in only two patients at 1-year follow-up. None of the children in this group developed morbid obesity. There was one case of CSF leak requiring repair after the initial surgery. Peri-operative mortality occurred in one patient, secondary to ventriculitis. The patients who benefit most from the combined approach are those who present with raised intracranial pressure secondary to a large tumor with a cyst causing hydrocephalus. Intraventricular endoscopic cyst drainage allows resolution of hydrocephalus with restoration of normal intracranial pressure, gives time for proper preoperative work-up, and has reduced incidence of CSF leak after

  14. A Novel in situ Trigger Combination Method

    International Nuclear Information System (INIS)

    Buzatu, Adrian; Warburton, Andreas; Krumnack, Nils; Yao, Wei-Ming

    2012-01-01

    Searches for rare physics processes using particle detectors in high-luminosity colliding hadronic beam environments require the use of multi-level trigger systems to reject colossal background rates in real time. In analyses like the search for the Higgs boson, there is a need to maximize the signal acceptance by combining multiple different trigger chains when forming the offline data sample. In such statistically limited searches, datasets are often amassed over periods of several years, during which the trigger characteristics evolve and their performance can vary significantly. Reliable production cross-section measurements and upper limits must take into account a detailed understanding of the effective trigger inefficiency for every selected candidate event. We present as an example the complex situation of three trigger chains, based on missing energy and jet energy, to be combined in the context of the search for the Higgs (H) boson produced in association with a W boson at the Collider Detector at Fermilab (CDF). We briefly review the existing techniques for combining triggers, namely the inclusion, division, and exclusion methods. We introduce and describe a novel fourth in situ method whereby, for each candidate event, only the trigger chain with the highest a priori probability of selecting the event is considered. The in situ combination method has advantages of scalability to large numbers of differing trigger chains and of insensitivity to correlations between triggers. We compare the inclusion and in situ methods for signal event yields in the CDF WH search.
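A schematic of the in situ rule, with invented per-event trigger efficiencies: each candidate event is assigned only to its a priori most efficient chain, and the event is weighted by the inverse of that efficiency when forming yields:

```python
import numpy as np

# A priori selection probabilities of three toy trigger chains for three
# candidate events (rows: events; columns: chains; all numbers invented).
eff = np.array([
    [0.95, 0.60, 0.20],
    [0.40, 0.85, 0.30],
    [0.10, 0.20, 0.70],
])
best = eff.argmax(axis=1)                  # in situ: one chain per event
w = 1.0 / eff[np.arange(len(eff)), best]   # weight corrects for inefficiency
```

Because each event is attributed to exactly one chain, correlations between overlapping triggers drop out of the combination, which is the property noted above.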

  15. Hypnotherapy: A Combined Approach Using Psychotherapy and Behavior Modification.

    Science.gov (United States)

    Goldberg, Bruce

    1987-01-01

    Discusses use of hypnosis in traditional psychoanalysis, compares use of hypnosis in behavior modification therapy versus psychoanalysis, and presents a hypno-behavioral model which combines both approaches using hypnosis as the medium. (Author/NB)

  16. Combined SAFE/SNAP approach to safeguards evaluation

    International Nuclear Information System (INIS)

    Engi, D.; Chapman, L.D.; Grant, F.H.; Polito, J.

    1980-01-01

    The scope of a safeguards evaluation model can efficiently address one of two issues: (1) global safeguards effectiveness or (2) vulnerability analysis for individual scenarios. The Safeguards Automated Facility Evaluation (SAFE) focuses on the first issue, while the Safeguards Network Analysis Procedure (SNAP) is directed towards the second. A combined SAFE/SNAP approach to the problem of safeguards evaluation is described and illustrated through an example. 4 refs

  17. A facile approach combining photosensitive sol–gel with self-assembly method to fabricate superhydrophobic TiO{sub 2} films with patterned surface structure

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongfan, E-mail: duanzf@xaut.edu.cn [School of Materials Science and Engineering, Xi’an University of Technology, Xi’an 710048 (China); Shaanxi Key Laboratory of Electrical Materials and Infiltration Technology, Xi’an 710048 (China); Zhao, Zhen; Luo, Dan; Zhao, Maiqun [School of Materials Science and Engineering, Xi’an University of Technology, Xi’an 710048 (China); Zhao, Gaoyang, E-mail: Zhaogy@xaut.edu.cn [School of Materials Science and Engineering, Xi’an University of Technology, Xi’an 710048 (China); Shaanxi Key Laboratory of Electrical Materials and Infiltration Technology, Xi’an 710048 (China)

    2016-01-01

    Graphical abstract: - Highlights: • Patterned TiO{sub 2} films were prepared by photosensitive sol–gel method. • Surface had quasi micro-lens array structure, leading to superhydrophobicity. • The surface with the lowest period exhibited the highest contact angle of 163°. • UV irradiation induced the conversion to superhydrophilicity. - Abstract: Superhydrophobic TiO{sub 2} films with micro-patterned surface structure were prepared through a facile approach combining the photosensitive sol–gel method with subsequent surface modification by 1H,1H,2H,2H-perfluorooctyltrichlorosilane (PFOTCS). The patterned surface possessed a quasi micro-lens array structure resembling the processus mastoideus of the lotus leaf, leading to excellent hydrophobicity. The relationship between hydrophobic performance and the period of the micro-patterned TiO{sub 2} surface was investigated. The water contact angles (CAs) of the micro-patterned TiO{sub 2} surface increased with decreasing period, and the patterned surface with the lowest period of 0.83 μm showed the highest CA of 163°. This suggests that the approach offers a way to control the wettability of superhydrophobic surfaces by adjusting the fine pattern structure. Furthermore, the superhydrophobic state could be converted to superhydrophilicity under ultraviolet (UV) illumination as a result of the photocatalytic decomposition of the PFOTCS monolayer on the micro-patterned TiO{sub 2} surface.

  18. Combined Teaching Method: An Experimental Study

    Science.gov (United States)

    Kolesnikova, Iryna V.

    2016-01-01

    The search for the best approach to business education has led educators and researchers to seek many different teaching strategies, ranging from the traditional teaching methods to various experimental approaches such as active learning techniques. The aim of this experimental study was to compare the effects of the traditional and combined…

  19. A combined ADER-DG and PML approach for simulating wave propagation in unbounded domains

    KAUST Repository

    Amler, Thomas

    2012-09-19

    In this work, we present a numerical approach for simulating wave propagation in unbounded domains which combines discontinuous Galerkin methods with arbitrary high order time integration (ADER-DG) and a stabilized modification of perfectly matched layers (PML). Here, the ADER-DG method is applied to Bérenger’s formulation of PML. The instabilities caused by the original PML formulation are treated by a fractional step method that allows monitoring whether waves are damped in the PML region. In grid cells where waves are amplified by the PML, the contribution of damping terms is neglected and auxiliary variables are reset. Results of 2D simulations in acoustic media with constant and discontinuous material parameters are presented to illustrate the performance of the method.
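
    The stabilization logic described above, keeping the damping contribution only where it actually damps the wave and otherwise resetting the auxiliary variables, can be illustrated with a scalar toy update. The step functions here are hypothetical placeholders, not the actual ADER-DG or PML operators.

```python
def guarded_pml_step(u, aux, undamped_step, damping_step):
    """One fractional step: apply the PML damping sub-step only where it
    actually reduces the solution; otherwise drop it and reset auxiliaries."""
    u1 = undamped_step(u)
    u2, aux2 = damping_step(u1, aux)
    if abs(u2) <= abs(u1):      # waves damped: keep the damping contribution
        return u2, aux2
    return u1, 0.0              # waves amplified: neglect damping, reset aux
```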

  20. A new approach for peat inventory methods; Turvetutkimusten menetelmaekehitystarkastelu

    Energy Technology Data Exchange (ETDEWEB)

    Laatikainen, M.; Leino, J.; Lerssi, J.; Torppa, J.; Turunen, J. Email: jukka.turunen@gtk.fi

    2011-07-01

    Development of the new peatland inventory method started in 2009. There was a need to investigate whether new methods and tools could be developed cost-effectively so that field inventory work would more completely cover the whole peatland area while the quality and reliability of the final results would remain at a high level. The old inventory method in place at the Geological Survey of Finland (GTK) is based on the main transect and cross transect approach across a peatland area. The goal of this study was to find a practical grid-based method linked to the geographic information system and suitable for field conditions. The triangle-grid method with an even distance between the study points was found to be the most suitable approach. A new Ramac ground penetrating radar was obtained by the GTK in 2009, and it was included in the study of new peatland inventory methods. This radar model is relatively light and very suitable, for example, for forestry-drained peatlands, which are often difficult to cross because of the intensive ditch network. The goal was to investigate the best working methods for the ground penetrating radar to optimize its use in the large-scale peatland inventory. Together with the new field inventory methods, a novel interpolation-based method (MITTI) for modelling peat depths was developed. MITTI makes it possible to take advantage of all the available peat-depth data including, at the moment, aerogeophysical and ground penetrating radar measurements, drilling data and the mire outline. The characteristic uncertainties of each data type are taken into account and, in addition to the depth model itself, an uncertainty map of the model is computed. Combined with the grid-based field inventory method, this multi-approach provides better tools to more accurately estimate the peat depths, peat amounts and peat type distributions. The development of the new peatland inventory method was divided into four separate sections: (1) Development of new field
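
    A toy stand-in for such uncertainty-aware depth interpolation (not GTK's actual MITTI model) would weight each observation both by inverse distance and by the inverse variance of its data type, so that, for example, a drilling measurement counts more than a noisier aerogeophysical one:

```python
def interpolate_depth(x, y, observations, power=2.0):
    """Estimate peat depth at (x, y) from (xi, yi, depth, sigma) tuples,
    down-weighting distant points and uncertain data types."""
    num = den = 0.0
    for xi, yi, depth, sigma in observations:
        dist2 = (x - xi) ** 2 + (y - yi) ** 2
        # inverse-distance weight, divided by the data type's variance
        weight = 1.0 / ((dist2 ** (power / 2) + 1e-12) * sigma ** 2)
        num += weight * depth
        den += weight
    return num / den
```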

  1. Mode decomposition methods for flows in high-contrast porous media. Global-local approach

    KAUST Repository

    Ghommem, Mehdi; Presho, Michael; Calo, Victor M.; Efendiev, Yalchin R.

    2013-01-01

    In this paper, we combine concepts of the generalized multiscale finite element method (GMsFEM) and mode decomposition methods to construct a robust global-local approach for model reduction of flows in high-contrast porous media. This is achieved by implementing Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) techniques on a coarse grid computed using GMsFEM. The resulting reduced-order approach enables a significant reduction in the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider a variety of high-contrast coefficients and present the corresponding numerical results to illustrate the effectiveness of the proposed technique. This paper is a continuation of our work presented in Ghommem et al. (2013) [1] where we examine the applicability of POD and DMD to derive simplified and reliable representations of flows in high-contrast porous media on fully resolved models. In the current paper, we discuss how these global model reduction approaches can be combined with local techniques to speed-up the simulations. The speed-up is due to inexpensive, while sufficiently accurate, computations of global snapshots. © 2013 Elsevier Inc.
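
    The POD step of such a global reduction can be sketched with a plain SVD of the snapshot matrix; this is a generic illustration of extracting a reduced basis, not the GMsFEM-coupled implementation of the paper.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Columns of `snapshots` are solution states; return the leading POD
    modes capturing the requested fraction of the snapshot energy."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    # cumulative energy fraction carried by the leading singular values
    fraction = np.cumsum(s ** 2) / np.sum(s ** 2)
    rank = int(np.searchsorted(fraction, energy)) + 1
    return u[:, :rank]
```

    Projecting the full-order operators onto these few modes is what shrinks the flow problem while retaining the dominant dynamics.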

  3. BEPU methods and combining of uncertainties

    International Nuclear Information System (INIS)

    Prosek, A.; Mavko, B.

    2004-01-01

    After approval of the revised rule on the acceptance of emergency core cooling system (ECCS) performance in 1988, there has been significant interest in the development of codes and methodologies for best-estimate loss-of-coolant accident (LOCA) analyses. The Code Scaling, Applicability and Uncertainty (CSAU) evaluation method was developed and demonstrated for large-break (LB) LOCA in a pressurized water reactor. Later, several new best estimate plus uncertainty (BEPU) methods were developed in the world. The purpose of the paper is to identify and compare the statistical approaches of BEPU methods and present their important plant and licensing applications. The study showed that uncertainty analysis with random sampling of input parameters and the use of order statistics for desired tolerance limits of output parameters is today a commonly accepted approach. The existing BEPU methods seem mature enough, while future research may be focused on codes with internal assessment of uncertainty. (author)
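
    The order-statistics approach mentioned here is usually based on Wilks' formula; a minimal sketch computing the number of code runs needed for a one-sided tolerance limit (e.g. the familiar 59 runs for a 95%/95% statement using the sample maximum):

```python
from math import comb

def wilks_runs(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of code runs n such that the `order`-th largest
    output bounds the `coverage` quantile with probability `confidence`
    (one-sided Wilks tolerance limit)."""
    n = order
    while True:
        # P(at least `order` of the n samples exceed the coverage quantile)
        gamma = sum(comb(n, i) * coverage ** i * (1 - coverage) ** (n - i)
                    for i in range(n - order + 1))
        if gamma >= confidence:
            return n
        n += 1
```

    Using a higher-order statistic (second largest, third largest, ...) as the bound costs more runs but gives a less conservative limit.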

  4. Degradation of 2,4-dichlorophenol using combined approach based on ultrasound, ozone and catalyst.

    Science.gov (United States)

    Barik, Arati J; Gogate, Parag R

    2017-05-01

    The present work investigates the application of ultrasound and ozone operated individually and in combination with catalysts (ZnO and CuO) for establishing the possible synergistic effects for the degradation of 2,4-dichlorophenol. The dependency of the extent of degradation on operating parameters such as temperature (over the range of 30-36°C), initial pH (3-9), catalyst loading of ZnO (0.025-0.15 g/L) and CuO (0.02-0.1 g/L) and initial concentration of 2,4-DCP (20-50 ppm) has been established to maximize the efficacy of ultrasound (US) induced degradation. Using only US, the maximum degradation of 2,4-DCP obtained was 28.85% under optimized conditions of initial concentration of 20 ppm, pH of 5 and temperature of 34°C. A study of the effect of ozone flow rate for the ozone-only approach revealed that maximum degradation was obtained at a 400 mg/h ozone flow rate. The combined approaches US+O3, US+ZnO, US+CuO, O3+ZnO, O3+CuO, US+O3+ZnO and US+O3+CuO have been subsequently investigated under optimized conditions and observed to be more efficient as compared to the individual approaches. The maximum extent of degradation for the combined operation of US+O3 (400 mg/h)+ZnO (0.1 g/L) and US+O3 (400 mg/h)+CuO (0.08 g/L) has been obtained as 95.66% and 97.03% respectively. The degradation products of 2,4-DCP have been identified using GC-MS analysis and toxicity analysis has also been performed based on the anti-microbial activity test (agar-well diffusion method) for the different treatment strategies. The present work has conclusively established that the combined approach of US+O3+CuO was the most efficient treatment scheme, resulting in near complete degradation of 2,4-DCP with production of less toxic intermediates. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. A combined approach of AHP and TOPSIS methods applied in the field of integrated software systems

    Science.gov (United States)

    Berdie, A. D.; Osaci, M.; Muscalagiu, I.; Barz, C.

    2017-05-01

    Adopting the most appropriate technology for developing applications on an integrated software system for enterprises may result in great savings both in cost and hours of work. This paper proposes a research study for the determination of a hierarchy between three SAP (System Applications and Products in Data Processing) technologies. The technologies Web Dynpro - WD, Floorplan Manager - FPM and CRM WebClient UI - CRM WCUI are evaluated against multiple criteria in terms of the performances obtained through the implementation of the same web business application. To establish the hierarchy, a multi-criteria analysis model that combines the AHP (Analytic Hierarchy Process) and the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods was proposed. This model was built with the help of the SuperDecision software, which is based on the AHP method and determines the weights for the selected sets of criteria. The TOPSIS method was used to obtain the final ranking and the technology hierarchy.
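
    The TOPSIS ranking step can be sketched as follows, taking the criterion weights (for instance AHP-derived) as given. The decision matrix in the usage test is illustrative, not the paper's data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness coefficient for each alternative (row) over the criteria
    (columns). benefit[j] is True when larger values of criterion j are
    better; higher coefficients rank higher."""
    m = np.asarray(matrix, dtype=float)
    v = m / np.linalg.norm(m, axis=0) * weights        # normalize, then weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti_ideal = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti_ideal, axis=1)
    return d_worst / (d_best + d_worst)
```

    An alternative that simultaneously has the best benefit value and the lowest cost equals the ideal point and receives a closeness coefficient of 1.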

  6. Method for accelerated aging under combined environmental stress conditions

    International Nuclear Information System (INIS)

    Gillen, K.T.

    1979-01-01

    An accelerated aging method which can be used to simulate aging in combined stress environment situations is described. It is shown how the assumptions of the method can be tested experimentally. Aging data for a chloroprene cable jacketing material in single and combined radiation and temperature environments are analyzed and it is shown that these data offer evidence for the validity of the method
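
    The single-stress thermal backbone of such accelerated aging is the Arrhenius acceleration factor. The sketch below is that generic relation, not the paper's combined radiation-temperature model, and the 1.0 eV activation energy in the usage test is illustrative.

```python
import math

BOLTZMANN_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_factor(activation_ev, t_aging_k, t_service_k):
    """How much faster thermal degradation proceeds at the elevated aging
    temperature than at the service temperature (temperatures in kelvin)."""
    return math.exp(activation_ev / BOLTZMANN_EV
                    * (1.0 / t_service_k - 1.0 / t_aging_k))
```

    With an activation energy of 1.0 eV, aging at 100 °C accelerates degradation at 50 °C by roughly two orders of magnitude, which is why modest oven temperatures can compress years of service life into weeks.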

  7. Combined approach for gynecomastia.

    Science.gov (United States)

    El-Sabbagh, Ahmed Hassan

    2016-01-01

    Gynecomastia is a deformity of male chest. Treatment of gynecomastia varied from direct surgical excision to other techniques (mainly liposuction) to a combination of both. Skin excision is done according to the grade. In this study, experience of using liposuction adjuvant to surgical excision was described. Between September 2012 and April 2015, a total of 14 patients were treated with liposuction and surgical excision through a periareolar incision. Preoperative evaluation was done in all cases to exclude any underlying cause of gynecomastia. All fourteen patients were treated bilaterally (28 breast tissues). Their ages ranged between 13 and 33 years. Two patients were classified as grade I, and four as grade IIa, IIb or III, respectively. The first 3 patients showed seroma. Partial superficial epidermolysis of areola occurred in 2 cases. Superficial infection of incision occurred in one case and was treated conservatively. All grades of gynecomastia were managed by the same approach. Skin excision was added to a patient that had severe skin excess with limited activity and bad skin complexion. No cases required another setting or asked for 2(nd) opinion.

  8. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    International Nuclear Information System (INIS)

    Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter

    2012-01-01

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies in a systematic manner optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass, energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.
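
    The multi-objective step, keeping only designs for which no other simulated design is better in both cost and environmental impact, can be sketched as a simple Pareto filter. The (cost, impact) pairs in the usage test are illustrative, and both objectives are assumed to be minimized.

```python
def pareto_front(points):
    """Return the non-dominated (cost, impact) pairs among simulated
    designs, assuming both objectives are to be minimized."""
    front = []
    for p in points:
        # p is dominated if some other design is at least as good in both
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

    In the paper's workflow each candidate point would come from a process-simulator evaluation; the decision maker then picks a trade-off from the resulting front.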

  10. The combined theoretical and experimental approach to arrive at optimum parameters in friction stir welding

    Science.gov (United States)

    Jagadeesha, C. B.

    2017-12-01

    Although friction stir welding was invented long ago (1991) by TWI, England, until now no method, procedure or approach has been developed that helps to quickly obtain the optimum or exact parameters yielding a good or sound weld. An approach has been developed in which an equation has been derived by which an approximate rpm can be obtained; by setting a range of ±100 or 50 rpm around the approximate rpm and by setting the welding speed equal to 60 mm/min or 50 mm/min, one can conduct FSW experiments to quickly reach the optimum parameters, i.e. the desired rpm and welding speed, which yield a sound weld. This approach can be effectively used to obtain sound welds for all similar and dissimilar combinations of materials such as steel, Al, Mg, Ti, etc.

  11. Prognostic factors in invasive bladder carcinoma treated by combined modality protocol (organ-sparing approach)

    International Nuclear Information System (INIS)

    Matos, Tadeja; Cufer, Tanja; Cervek, Jozica; Borstnar, Simona; Kragelj, Borut; Zumer-Pregelj, Mirjana

    2000-01-01

    Purpose: The results of bladder sparing approach for the treatment of muscle-invasive bladder cancer, using a combination of transurethral resection (TUR), chemotherapy, and radiotherapy, are encouraging. The survival of patients treated by this method is similar to the survival of patients treated by radical cystectomy. The aim of our study was to find out which pretreatment characteristics influence the survival of patients treated by organ sparing approach that would enable us to identify the patients most suitable for this type of treatment. Methods and Materials: The prognostic value of different factors, such as age, gender, performance status, hemoglobin level, clinical stage, histologic grade, presence of obstructive uropathy, and completeness of TUR, has been studied in 105 patients with invasive bladder cancer, who received a bladder sparing treatment in the period from 1988 to 1995. They were treated with a combination of TUR, followed by 2-4 cycles of methotrexate, cisplatinum, and vinblastine polychemotherapy. In complete responders the treatment was completed by radiotherapy (50 Gy to the bladder and 40 Gy to the regional lymph nodes), whereas nonresponders underwent cystectomy whenever feasible. Results: Our study has confirmed an independent prognostic value of performance status, histologic grade, and obstructive uropathy, for the disease-specific survival (DSS) of bladder cancer patients treated by a conservative approach. We believe that performance status best reflects the extent of disease and exerts significant influence on the extent and course of treatment, while obstructive uropathy is a good indicator of local spread of the disease, better than clinical T-stage. Our finding that histologic grade is one of the strongest prognostic factors shows that tumor biology also is a very important prognostic factor in patients treated by conservative approach. Conclusion: Patients with muscle-invasive bladder cancer who are most likely to benefit

  12. Comparison and combination of "direct" and fragment based local correlation methods: Cluster in molecules and domain based local pair natural orbital perturbation and coupled cluster theories

    Science.gov (United States)

    Guo, Yang; Becker, Ute; Neese, Frank

    2018-03-01

    Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximation to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster in molecule (CIM) approach as the fragment based approach. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single-, double- and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for performing the subsystem calculations. Our cluster-in-molecule approach is closely related to but slightly deviates from approaches in the literature since we have avoided real space cutoffs. Moreover, the neglected distant pair correlations in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) the better parallelization opportunities offered by CIM; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and, hence, allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases, where the largest subsystem calculation is too large for the canonical CCSD(T) method.

  13. A Mobile Outdoor Augmented Reality Method Combining Deep Learning Object Detection and Spatial Relationships for Geovisualization.

    Science.gov (United States)

    Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun

    2017-08-24

    The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device's built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction.
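
    The spatial-relationship part of the registration, placing a geo-anchored virtual object relative to the device's GPS position and magnetometer heading, reduces in the horizontal plane to a bearing computation. This is a simplified sketch of that one piece, not the paper's full sensor-fusion pipeline, and the test coordinates are arbitrary.

```python
import math

def view_offset_deg(dev_lat, dev_lon, heading_deg, obj_lat, obj_lon):
    """Signed angle, in degrees, between the device's viewing direction and
    a geo-anchored object; 0 means straight ahead, negative means left."""
    lat1, lat2 = math.radians(dev_lat), math.radians(obj_lat)
    dlon = math.radians(obj_lon - dev_lon)
    # great-circle initial bearing from the device to the object
    y = math.sin(dlon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    # wrap the offset into (-180, 180]
    return (bearing - heading_deg + 180.0) % 360.0 - 180.0
```

    An AR renderer would combine this horizontal offset with the IMU-derived pitch and the camera's field of view to place the virtual object on screen.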

  16. Combining engineering and data-driven approaches

    DEFF Research Database (Denmark)

    Fischer, Katharina; De Sanctis, Gianluca; Kohler, Jochen

    2015-01-01

    Two general approaches may be followed for the development of a fire risk model: statistical models based on observed fire losses can support simple cost-benefit studies but are usually not detailed enough for engineering decision-making. Engineering models, on the other hand, require many assumptions that may result in a biased risk assessment. In two related papers we show how engineering and data-driven modelling can be combined by developing generic risk models that are calibrated to statistical data on observed fire events. The focus of the present paper is on the calibration procedure, applied to the calibration of a generic fire risk model for single family houses to Swiss insurance data. The example demonstrates that the bias in the risk estimation can be strongly reduced by model calibration.

  17. Comparison of marine spatial planning methods in Madagascar demonstrates value of alternative approaches.

    Directory of Open Access Journals (Sweden)

    Thomas F Allnutt

    The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method were drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss of the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the "strict protection" class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches.

  18. Tests of Local Hadron Calibration Approaches in ATLAS Combined Beam Tests

    International Nuclear Information System (INIS)

    Grahn, Karl-Johan; Kiryunin, Andrey; Pospelov, Guennadi

    2011-01-01

    Three ATLAS calorimeters in the region of the forward crack at |η| = 3.2 in the nominal ATLAS setup, and a typical section of the two barrel calorimeters at |η| = 0.45 of ATLAS, have been exposed to combined beam tests with single electrons and pions. Detailed shower shape studies of electrons and pions, with comparisons to various Geant4-based simulations utilizing different physics lists, are presented for the endcap beam test. The local hadron calibration approach as used in the full ATLAS setup has been applied to the endcap beam test data; an extension of it using layer correlations has been tested with the barrel test beam data. Both methods utilize modular correction steps based on shower shape variables to correct for invisible energy inside the reconstructed clusters in the calorimeters (compensation) and for lost energy deposits outside of the reconstructed clusters (dead material and out-of-cluster deposits). Results for both methods and comparisons to Monte Carlo simulations are presented.

  19. Calculation of shielding thickness by combining the LTSN and Decomposition methods

    International Nuclear Information System (INIS)

    Borges, Volnei; Vilhena, Marco T. de

    1997-01-01

    A combination of the LTSN and Decomposition methods is applied to shielding thickness calculation. The angular flux is evaluated by solving a transport problem in planar geometry, considering the SN approximation, anisotropic scattering, and one energy group. The Laplace transform is applied to the set of SN equations. The transformed angular flux is then obtained by solving a transcendental equation, and the angular flux is restored by the Heaviside expansion technique. The scalar flux is obtained by integrating the angular flux with a Gaussian quadrature scheme. The scalar flux, in turn, is linearly related to the dose rate through the mass-energy absorption coefficient. The shielding thickness is obtained by solving the transcendental equation that results from applying the LTSN approach combined with the Decomposition method. Numerical simulations are reported. (author). 6 refs., 3 tabs
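The final step described above, solving a transcendental equation for the shield thickness, can be illustrated with a much simpler stand-in: the sketch below replaces the LTSN angular-flux solution with plain exponential attenuation and finds the thickness by bisection. The function name, attenuation coefficient, and dose figures are invented for illustration.

```python
import math

def required_thickness(dose_rate_limit, incident_dose_rate, mu,
                       lo=0.0, hi=500.0, tol=1e-8):
    """Solve f(x) = D0*exp(-mu*x) - D_limit = 0 for the shield thickness x
    by bisection.  Simple exponential attenuation stands in for the LTSN
    angular-flux solution; mu is a hypothetical linear attenuation
    coefficient (1/cm)."""
    f = lambda x: incident_dose_rate * math.exp(-mu * x) - dose_rate_limit
    assert f(lo) > 0 > f(hi), "the dose limit must lie inside the bracket"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid        # still above the limit: need more shield
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. reduce 100 dose-rate units to 1 with mu = 0.5 1/cm
x = required_thickness(1.0, 100.0, 0.5)
print(round(x, 3))  # -> 9.21 (analytic answer: ln(100)/0.5 ≈ 9.210)
```

Bisection is used because the real LTSN dose-rate expression is transcendental in the thickness; any bracketing root finder would do.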

  20. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    Science.gov (United States)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
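The period-jump correction described above can be sketched in a few lines: the wrapped high-frequency phase supplies a precise fractional period, while a coarse absolute code (here playing the role of the low-frequency measurement) fixes the integer period order. The function and the values below are illustrative, not the paper's implementation.

```python
import math

def unwrap_with_coarse(phi_fine, code_coarse):
    """Correct period-jump errors in a wrapped fine phase using a coarse
    absolute code.  phi_fine: wrapped phase in [0, 2*pi); code_coarse:
    coarse absolute code expressed in fine-fringe periods, assumed noisy
    by less than half a period.  Returns the corrected absolute code."""
    frac = phi_fine / (2 * math.pi)   # precise fractional period from fine phase
    k = round(code_coarse - frac)     # integer period order, jump-corrected
    return k + frac

# true position: 7.25 fine periods; the coarse estimate is off by 0.3 period
print(unwrap_with_coarse(0.5 * math.pi, 7.55))  # -> 7.25
```

The coarse code only needs to be accurate to within half a fine period, which is exactly why the paper adds the low-frequency fringe set: it keeps `round(...)` from landing on the wrong integer.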

  1. Multiport Combined Endoscopic Approach to Nonembolized Juvenile Nasopharyngeal Angiofibroma with Parapharyngeal Extension: An Emerging Concept

    Directory of Open Access Journals (Sweden)

    Tiruchy Narayanan Janakiram

    2016-01-01

    Background. Surgical approaches to the parapharyngeal space (PPS) are challenging by virtue of its deep location and neurovascular content. Juvenile Nasopharyngeal Angiofibroma (JNA) is a formidable hypervascular tumor that involves multiple compartments as it increases in size. In tumors with extension to the parapharyngeal space, the endonasal approach alone was observed to be inadequate. The combined Endoscopic Endonasal Approach and Endoscopic Transoral Surgery (EEA-ETOS) approach provides a customized multicorridor alternative for accessing JNA for its safe and efficient resection. Methods. The study presents a case series of patients with JNA with prestyloid parapharyngeal space extension, operated on by a combined endoscopic endonasal and endoscopic transoral approach for tumor excision. Results. The multiport EEA-ETOS approach provided wide exposure for accessing JNA in the parapharyngeal space. No major complications were observed, and no conversion to an external approach was required. Postoperative morbidity was low, and postoperative scans showed no residual tumor. A one-year follow-up was maintained, with no evidence of disease recurrence. Conclusion. Although preliminary, our experience demonstrates the safety and efficacy of the multiport approach in providing access to multiple compartments and facilitating total excision of JNA in selected cases.

  2. Phytophagous insects on native and non-native host plants: combining the community approach and the biogeographical approach.

    Directory of Open Access Journals (Sweden)

    Kim Meijer

    During the past centuries, humans have introduced many plant species into areas where they do not naturally occur. Some of these species establish populations and in some cases become invasive, causing economic and ecological damage. Which factors determine the success of non-native plants is still incompletely understood, but the absence of natural enemies in the invaded area (the Enemy Release Hypothesis; ERH) is one of the most popular explanations. One of the predictions of the ERH, a reduced herbivore load on non-native plants compared with native ones, has been repeatedly tested. However, many studies have used either a community approach (sampling from native and non-native species in the same community) or a biogeographical approach (sampling from the same plant species in areas where it is native and where it is non-native). Either method can sometimes lead to inconclusive results. To resolve this, we here add to the small number of studies that combine both approaches. We do so in a single study of insect herbivory on 47 woody plant species (trees, shrubs, and vines) in the Netherlands and Japan. We find higher herbivore diversity, a higher herbivore load, and more herbivory on native plants than on non-native plants, generating support for the enemy release hypothesis.

  3. 3rd Workshop on "Combinations of Intelligent Methods and Applications"

    CERN Document Server

    Palade, Vasile

    2013-01-01

    The combination of different intelligent methods is a very active research area in Artificial Intelligence (AI). The aim is to create integrated or hybrid methods that benefit from each of their components. The 3rd Workshop on “Combinations of Intelligent Methods and Applications” (CIMA 2012) was intended as a forum for exchanging experience and ideas among researchers and practitioners who deal with combining intelligent methods, either based on first principles or in the context of specific applications. CIMA 2012 was held in conjunction with the 22nd European Conference on Artificial Intelligence (ECAI 2012). This volume includes revised versions of the papers presented at CIMA 2012.

  4. Comparison of accounting methods for business combinations

    Directory of Open Access Journals (Sweden)

    Jaroslav Sedláček

    2012-01-01

    The revised accounting rules applicable to business combinations, in force since July 1st, 2009, are the result of several years of effort towards the convergence of U.S. and international financial accounting standards. Following the harmonization of global accounting procedures, Czech accounting regulations have also been revised and implemented. In our research we wanted to see how the changes can affect the strategy and timing of business combinations. The comparative analysis is mainly focused on the differences between U.S. and international accounting policies and Czech accounting regulations. The key areas of analysis and synthesis are the identification of a business combination, accounting methods for business combinations, and goodwill recognition. The result is an assessment of the impact of the identified differences on the reported financial position and profit or loss of a company.

  5. A linguistic approach to solving of the problem of technological adjustment of combines

    Directory of Open Access Journals (Sweden)

    Lyudmila V. Borisova

    2017-06-01

    Introduction: The article deals with a linguistic approach to the technological adjustment of combine harvesters in field conditions. A short characterization of the subject domain is provided, and the place of the task of adjusting the combine harvester's working bodies within harvesting is considered. Various groups of attributes of the task are distinguished: external signs of degraded work quality, regulated parameters of the machine, and parameters of technical condition. Numerical data characterizing the interrelations between external signs and machine parameters are provided. Materials and Methods: A combine harvester is a complex dynamic system functioning under constantly changing external conditions, which constrains the methods of technological adjustment that can be used. Both quantitative and qualitative information is used to control harvesting. The presence of different types of uncertainty in the semantic spaces of environmental factors and machine parameters motivates a method of technological adjustment based on fuzzy logical inference. Results: As a result of the analysis, a decision-making methodology for fuzzy environments is adapted to the subject domain under study. A generalized scheme of fuzzy control of the process of technological adjustment of the machine is proposed. Models of the studied semantic spaces are considered. The feasibility of using deductive and inductive inference for various tasks of preliminary setup and adjustment of technological settings is shown. A formal-logical scheme of the decision-making process based on fuzzy expert knowledge is proposed; the scheme includes the main stages of the solution: fuzzification, composition, and defuzzification. The question of quantitatively assessing the consistency of expert knowledge is also considered.
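The fuzzification, composition, and defuzzification stages named above can be illustrated with a toy Mamdani-style controller. The membership ranges, rule set, and output values below are invented for illustration; they are not the paper's expert knowledge base.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_adjust(grain_loss):
    """Toy fuzzy rule base for one adjustment: fuzzify the grain loss (%),
    fire two rules, and defuzzify by a weighted average of singleton
    outputs (a common simplification of centroid defuzzification).
    Ranges and outputs are illustrative, not agronomic data."""
    low  = tri(grain_loss, -1.0, 0.0, 2.0)   # membership of 'loss is low'
    high = tri(grain_loss,  1.0, 3.0, 5.0)   # membership of 'loss is high'
    # rule outputs as singletons: keep fan speed (0) / raise it (+150 rpm)
    num = low * 0.0 + high * 150.0
    den = low + high
    return num / den if den else 0.0

print(fuzzy_adjust(1.5))  # -> 75.0 (both rules fire equally)
```

At a loss of 1.5% both memberships are 0.25, so the recommended change is the midpoint of the two rule outputs, which is the behavior one expects from weighted-average defuzzification.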

  6. Degradation of acephate using combined ultrasonic and ozonation method

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2015-07-01

    The degradation of acephate in aqueous solutions was investigated with the ultrasonic and ozonation methods, as well as a combination of both. An experimental facility was designed, and operation parameters such as ultrasonic power, temperature, and gas flow rate were strictly held constant. The frequency of the ultrasonic wave was 160 kHz. Ultraviolet-visible (UV-Vis) and Raman spectroscopic techniques were used in the experiment. The UV-Vis results show that ultrasonication and ozonation have a synergistic effect in the combined system: the degradation efficiency of acephate increases from 60.6% to 87.6% when the solution is irradiated by a 160 kHz ultrasonic wave for 60 min during the ozonation process, and the efficiency of the combined method is higher than the sum of those of the separate ultrasonic and ozonation methods. Raman spectra show that degradation via the combined ultrasonic/ozonation method is more thorough than via photocatalysis. The oxidation of nitrogen atoms is promoted under ultrasonic waves. Changes in the inorganic ions and the degradation pathway during the degradation process were also investigated. Most final products are innocuous to the environment.

  7. Improving the Fine-Tuning of Metaheuristics: An Approach Combining Design of Experiments and Racing Algorithms

    Directory of Open Access Journals (Sweden)

    Eduardo Batista de Moraes Barbosa

    2017-01-01

    Usually, metaheuristic algorithms are adapted to a large set of problems by applying a few modifications to parameters for each specific case. However, this flexibility demands a huge effort to tune such parameters correctly. The tuning of metaheuristics therefore arises as one of the most important challenges in research on these algorithms. This paper presents a methodology combining statistical and artificial intelligence methods for the fine-tuning of metaheuristics. The key idea is a heuristic method, called the Heuristic Oriented Racing Algorithm (HORA), which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study of fine-tuning two distinct metaheuristics, Simulated Annealing (SA) and a Genetic Algorithm (GA), to solve the classical traveling salesman problem. The results are compared with those of the same metaheuristics tuned through a racing method. Broadly, the proposed approach proved effective in terms of the overall time of the tuning process. Our results reveal that metaheuristics tuned by means of HORA achieve, with much less computational effort, results similar to those obtained with the other fine-tuning approach.
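The racing idea underlying HORA can be sketched as follows. This is a minimal illustration: a fixed elimination margin stands in for the statistical test used by real racing algorithms, and the toy cost function is invented.

```python
import random
import statistics

def race(candidates, evaluate, rounds=10, margin=1.5, seed=0):
    """Minimal racing loop in the spirit of racing algorithms: after each
    round of evaluations on a shared instance, drop candidates whose
    running mean cost trails the best mean by more than `margin`.  Real
    racing methods use a statistical test instead of a fixed margin."""
    rng = random.Random(seed)
    scores = {c: [] for c in candidates}
    alive = list(candidates)
    for _ in range(rounds):
        instance = rng.random()                  # a random problem instance
        for c in alive:
            scores[c].append(evaluate(c, instance))
        means = {c: statistics.mean(scores[c]) for c in alive}
        best = min(means.values())
        alive = [c for c in alive if means[c] <= best + margin]
        if len(alive) == 1:
            break
    return alive

# toy tuning task: parameter c, cost = |c - 3| plus instance-dependent noise
survivors = race([1, 2, 3, 4, 5], lambda c, inst: abs(c - 3) + inst)
print(survivors)  # -> [2, 3, 4]
```

Poor configurations (1 and 5) are discarded after the first round, so later evaluation effort is spent only on the promising region around the optimum, which is the economy that racing brings to tuning.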

  8. Characterization of chlorinated solvent contamination in limestone using innovative FLUTe® technologies in combination with other methods in a line of evidence approach

    DEFF Research Database (Denmark)

    Broholm, Mette Martina; Janniche, Gry Sander; Mosthaf, Klaus

    2016-01-01

    Characterization of dense non-aqueous phase liquid (DNAPL) source zones in limestone aquifers/bedrock is essential to develop accurate site-specific conceptual models and perform risk assessment. Here innovative field methods were combined to improve determination of source zone architecture, hydrogeology, and contaminant distribution. The FACT™ is a new technology, and it was applied and tested at a contaminated site with a limestone aquifer, together with a number of existing methods including wire-line coring with core subsampling, FLUTe® transmissivity profiling, and multilevel water sampling. Groundwater sampling (under two flow conditions) and FACT™ sampling and analysis, combined with FLUTe® transmissivity profiling and modeling, were used to provide a line of evidence for the presence of DNAPL, dissolved, and sorbed phase contamination in the limestone fractures and matrix.

  9. Systematic approaches to data analysis from the Critical Decision Method

    Directory of Open Access Journals (Sweden)

    Martin Sedlár

    2015-01-01

    The aim of the present paper is to introduce how to analyse qualitative data from the Critical Decision Method. First, a characterization of the method provides a meaningful introduction to the issue. This method, used in naturalistic decision making research, is one of the cognitive task analysis methods; it is based on a retrospective semistructured interview about a critical incident from work, and it may be applied in various domains such as emergency services, the military, transport, sport, or industry. Researchers can make two types of methodological adaptation: within-method adaptations modify the way the interviews are conducted, and cross-method adaptations combine this method with other related methods. There are many descriptions of how to conduct the interview, but descriptions of how the data should be analysed are rare. Some researchers use conventional approaches like content analysis or grounded theory, or individual procedures tailored to the objectives of the research project. Wong (2004) describes two approaches to data analysis proposed for this method of data collection, which are described and reviewed here in detail. They enable systematic work with a large amount of data. The structured approach organizes the data according to an a priori analysis framework and is suitable for a clearly defined object of research. Each incident is studied separately: first a decision chart showing the main decision points is made, then an incident summary. These decision points are used to identify the relevant statements from the transcript, which are analysed in terms of the Recognition-Primed Decision Model. Finally, the results from all the analysed incidents are integrated. The limitation of the structured approach is that it may not reveal some interesting concepts. The emergent themes approach helps to identify such concepts while maintaining a systematic framework for analysis, and it is used for exploratory research designs.

  10. Revisiting liquid lubrication methods by means of a fully coupled approach combining plastic deformation and liquid lubrication

    DEFF Research Database (Denmark)

    Üstünyagiz, Esmeray; Christiansen, Peter; Nielsen, Chris Valentin

    2017-01-01

    This paper presents a new approach based on a fully coupled procedure in which the lubricant flow and the plastic deformation of the metallic material in metal forming are solved simultaneously. The proposed method is an alternative to conventional modelling techniques, which allow studying the effect… and analytical model, and by variations in drawing speed. Good agreement is found with the experimental observations.

  11. Ergonomics, automation and logistics: practical and effective combination of working methods, a case study of a baking company.

    Science.gov (United States)

    Quintana, Leonardo; Arias, Claudia; Cordoba, Jorge; Moroy, Magda; Pulido, Jean; Ramirez, Angela

    2012-01-01

    The aim of this study was to combine three different analytical methods from three different disciplines to diagnose the ergonomic conditions, manufacturing, and supply chain operation of a baking company. The study presents a comprehensive set of working methods that combines ergonomics, automation, and logistics study methods in the diagnosis of working conditions and productivity. The participatory approach of this type of study, which draws on the feelings and first-hand knowledge of the workers involved in the operation, is a determining factor in defining points of action and ergonomic interventions, as well as in defining opportunities for the automation of manufacturing and logistics to cope with the needs of the company. The study identified an ergonomic problem (a high prevalence of wrist-hand pain), and the combination of interdisciplinary techniques applied made it possible to improve this condition in the company. This type of study provides a primary basis for the opportunities presented by the combination of specialized methods from different disciplines in defining comprehensive action plans for the company. Additionally, it outlines opportunities for improvement and recommendations to mitigate the burden associated with occupational diseases and, as an end result, improve the quality of life and productivity of workers.

  12. On the synthesis of peptide imprinted polymers by a combined suspension-Epitope polymerization method

    International Nuclear Information System (INIS)

    Kotrotsiou, O.; Chaitidou, S.; Kiparissides, C.

    2009-01-01

    In the past, molecularly imprinted polymers (MIPs), prepared by free-radical bulk polymerization, have been used for the selective recognition of small biomolecules (i.e., amino acids and amino acid derivatives). Presently, there is a need for the synthesis of MIPs capable of recognizing larger biomolecules (i.e., peptides and proteins). Moreover, the production of MIP microparticles with well-defined morphological characteristics (e.g., particle size distribution, porosity, etc.) via particulate polymerization techniques is highly desirable. In the present study, the synthesis of molecularly imprinted microparticles produced via the suspension and inverse suspension polymerization methods, using the 'epitope approach', is reported. Hydrophobic (i.e., Boc-Trp-Trp-Trp) or hydrophilic (i.e., His-Phe) oligo-peptides were employed as template molecules. The potential of combining the suspension polymerization method with the 'epitope approach' for the production of MIP microparticles is demonstrated, as are the specificity and selectivity of the MIP microparticles towards hydrophobic and hydrophilic oligo-peptides. The proposed method appears to be a very promising and efficient technique for the separation of proteins.

  13. A combined MOIP-MCDA approach to building and screening atmospheric pollution control strategies in urban regions.

    Science.gov (United States)

    Mavrotas, George; Ziomas, Ioannis C; Diakouaki, Danae

    2006-07-01

    This article presents a methodological approach for the formulation of control strategies capable of reducing atmospheric pollution at the standards set by European legislation. The approach was implemented in the greater area of Thessaloniki and was part of a project aiming at the compliance with air quality standards in five major cities in Greece. The methodological approach comprises two stages: in the first stage, the availability of several measures contributing to a certain extent to reducing atmospheric pollution indicates a combinatorial problem and favors the use of Integer Programming. More specifically, Multiple Objective Integer Programming is used in order to generate alternative efficient combinations of the available policy measures on the basis of two conflicting objectives: public expenditure minimization and social acceptance maximization. In the second stage, these combinations of control measures (i.e., the control strategies) are then comparatively evaluated with respect to a wider set of criteria, using tools from Multiple Criteria Decision Analysis, namely, the well-known PROMETHEE method. The whole procedure is based on the active involvement of local and central authorities in order to incorporate their concerns and preferences, as well as to secure the adoption and implementation of the resulting solution.
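The first (MOIP) stage can be illustrated by brute-force enumeration over a handful of hypothetical control measures: every portfolio meeting a required emission reduction is generated, and only the Pareto-efficient ones under the two objectives, minimum cost and maximum acceptance, are kept. All measure data below are invented; a real instance would use an integer programming solver rather than enumeration.

```python
from itertools import product

def efficient_portfolios(measures, required_reduction):
    """Enumerate all subsets of control measures that reach the required
    emission reduction and keep those that are Pareto-efficient for
    (minimize cost, maximize acceptance)."""
    feasible = []
    for mask in product([0, 1], repeat=len(measures)):
        chosen = [m for pick, m in zip(mask, measures) if pick]
        if sum(m["reduction"] for m in chosen) >= required_reduction:
            cost = sum(m["cost"] for m in chosen)
            acc = sum(m["acceptance"] for m in chosen)
            feasible.append((cost, acc, tuple(m["name"] for m in chosen)))
    # keep p unless some q is at least as cheap AND at least as accepted
    pareto = [p for p in feasible
              if not any(q[0] <= p[0] and q[1] >= p[1] and q[:2] != p[:2]
                         for q in feasible)]
    return sorted(pareto)

measures = [
    {"name": "low-emission buses", "reduction": 30, "cost": 5, "acceptance": 8},
    {"name": "industrial filters", "reduction": 50, "cost": 9, "acceptance": 4},
    {"name": "traffic restrictions", "reduction": 40, "cost": 2, "acceptance": 1},
]
for cost, acc, names in efficient_portfolios(measures, 60):
    print(cost, acc, names)  # the efficient (cost, acceptance) portfolios
```

The resulting efficient frontier is exactly what the second (MCDA) stage would then rank against the wider criteria set.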

  15. A combined data mining approach using rough set theory and case-based reasoning in medical datasets

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Rezvan

    2014-06-01

    Case-based reasoning (CBR) is the process of solving new cases by retrieving the most relevant ones from an existing knowledge base. Since irrelevant or redundant features not only remarkably increase memory requirements but also the time complexity of case retrieval, reducing the number of dimensions is an issue worth considering. This paper uses rough set theory (RST) to reduce the number of dimensions in a CBR classifier, with the aim of increasing accuracy and efficiency. The CBR component exploits a co-occurrence-based distance for categorical data to measure the similarity of cases. This distance is based on the proportional distribution of the different categorical values of the features, and the weight used for a feature is the average of the co-occurrence values of the features. The combination of RST and CBR has been applied to the real categorical datasets of Wisconsin Breast Cancer, Lymphography, and Primary cancer. The 5-fold cross-validation method is used to evaluate the performance of the proposed approach. The results show that this combined approach lowers computational costs and improves performance metrics, including accuracy and interpretability, compared with other approaches developed in the literature.
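The retrieval step of such a CBR classifier can be sketched as nearest-neighbour search over categorical cases. Here a plain overlap (matching) distance stands in for the paper's co-occurrence measure, and the tiny hand-made case base and the "reduct" are purely illustrative.

```python
def retrieve(case_base, query, features):
    """Nearest-neighbour CBR retrieval over categorical cases.  `features`
    plays the role of an RST reduct: only those attributes take part in
    the comparison, so redundant dimensions are simply never touched."""
    def dist(a, b):
        # overlap distance: count the attributes on which the cases differ
        return sum(a[f] != b[f] for f in features)
    best = min(case_base, key=lambda c: dist(c, query))
    return best["class"]

case_base = [
    {"shape": "round", "margin": "smooth", "density": "low", "class": "benign"},
    {"shape": "round", "margin": "smooth", "density": "high", "class": "benign"},
    {"shape": "irregular", "margin": "spiculated", "density": "high",
     "class": "malignant"},
]
# suppose RST found {shape, margin} to be a reduct, so 'density' is dropped
print(retrieve(case_base, {"shape": "irregular", "margin": "spiculated"},
               ["shape", "margin"]))  # -> malignant
```

Dropping `density` from the comparison is the point of the RST stage: the retrieval answer is unchanged while one dimension's worth of storage and comparison work disappears.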

  16. A systematic study of genome context methods: calibration, normalization and combination

    Directory of Open Access Journals (Sweden)

    Dale Joseph M

    2010-10-01

    Background: Genome context methods were introduced in the last decade as automatic methods to predict functional relatedness between genes in a target genome, using the patterns of existence and relative locations of the homologs of those genes in a set of reference genomes. Much work has been done on applying these methods to different bioinformatics tasks, but few papers present the systematic study of the methods and their combination that is necessary for their optimal use. Results: We present a thorough study of the four main families of genome context methods found in the literature: phylogenetic profile, gene fusion, gene cluster, and gene neighbor. We find that for most organisms the gene neighbor method outperforms the phylogenetic profile method by as much as 40% in sensitivity, being competitive with the gene cluster method at low sensitivities. Gene fusion is generally the worst performing of the four methods. A thorough exploration of the parameter space for each method is performed, and results across different target organisms are presented. We propose the use of normalization procedures, like those used on microarray data, for the genome context scores, and show that substantial gains can be achieved from a simple normalization technique. In particular, the sensitivity of the phylogenetic profile method improves by around 25% after normalization, resulting, to our knowledge, in the best-performing phylogenetic profile system in the literature. Finally, we show results from combining the various genome context methods into a single score. When a cross-validation procedure is used to train the combiners, with both original and normalized scores as input, a decision tree combiner yields gains of up to 20% with respect to the gene neighbor method, an overall gain of around 15% over what can be considered the state of the art in this area.
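The normalization-then-combination idea can be sketched with z-score normalization and a simple per-pair average in place of the study's trained decision-tree combiner. The method names and scores below are invented for illustration.

```python
import statistics

def zscore(scores):
    """Normalize one method's scores to zero mean and unit variance, the
    kind of per-method normalization the study borrows from microarray
    analysis so that differently scaled scores become comparable."""
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores) or 1.0   # guard against constant scores
    return [(s - mu) / sd for s in scores]

def combine(method_scores):
    """Combine several genome-context methods by averaging their normalized
    scores per gene pair (a simple stand-in for a trained combiner)."""
    normalized = [zscore(s) for s in method_scores]
    return [statistics.mean(col) for col in zip(*normalized)]

# three gene pairs scored by two hypothetical methods on different scales
phylo_profile = [0.9, 0.5, 0.1]
gene_neighbor = [120.0, 40.0, 20.0]
combined = combine([phylo_profile, gene_neighbor])
print(combined.index(max(combined)))  # -> 0: pair 0 ranks highest
```

Without the normalization step, the raw gene-neighbor scores would dominate the average purely because of their scale, which is precisely the problem the per-method normalization removes.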

  17. Hybrid Modeling and Optimization of Manufacturing Combining Artificial Intelligence and Finite Element Method

    CERN Document Server

    Quiza, Ramón; Davim, J Paulo

    2012-01-01

    Artificial intelligence (AI) techniques and the finite element method (FEM) are both powerful computing tools, which are extensively used for modeling and optimizing manufacturing processes. The combination of these tools has resulted in a new flexible and robust approach as several recent studies have shown. This book aims to review the work already done in this field as well as to expose the new possibilities and foreseen trends. The book is expected to be useful for postgraduate students and researchers, working in the area of modeling and optimization of manufacturing processes.

  18. Review of life-cycle approaches coupled with data envelopment analysis: launching the CFP + DEA method for energy policy making.

    Science.gov (United States)

    Vázquez-Rowe, Ian; Iribarren, Diego

    2015-01-01

    Life-cycle (LC) approaches play a significant role in energy policy making to determine the environmental impacts associated with the choice of energy source. Data envelopment analysis (DEA) can be combined with LC approaches to provide quantitative benchmarks that orientate the performance of energy systems towards environmental sustainability, with different implications depending on the selected LC + DEA method. The present paper examines currently available LC + DEA methods and develops a novel method combining carbon footprinting (CFP) and DEA. Thus, the CFP + DEA method is proposed: a five-step structure including data collection for multiple homogeneous entities, calculation of target operating points, evaluation of current and target carbon footprints, and result interpretation. As the current context for energy policy implies an anthropocentric perspective focused on the global warming impact of energy systems, the CFP + DEA method is foreseen to be the most consistent LC + DEA approach to provide benchmarks for energy policy making. The fact that this method relies on the definition of operating points with optimised resource intensity helps to moderate concerns about the omission of other environmental impacts. Moreover, the CFP + DEA method benefits from CFP specifications in terms of flexibility, understanding, and reporting.
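The benchmarking step of the CFP + DEA idea can be sketched for the single-input, single-output case, where DEA efficiency reduces to comparing footprint-per-output ratios: each entity's target footprint is what it would emit if it matched the best observed intensity. Entity names and figures are invented for illustration.

```python
def cfp_targets(entities):
    """Toy version of the CFP + DEA benchmarking idea: find the best
    observed carbon intensity (footprint per unit output) and set each
    entity's target footprint to best-practice intensity times its own
    output.  With one input and one output this ratio benchmark coincides
    with the DEA efficient frontier."""
    best_intensity = min(e["cfp"] / e["output"] for e in entities)
    return {e["name"]: round(best_intensity * e["output"], 2)
            for e in entities}

plants = [
    {"name": "plant A", "output": 100.0, "cfp": 250.0},  # intensity 2.5
    {"name": "plant B", "output": 80.0,  "cfp": 160.0},  # intensity 2.0, best
    {"name": "plant C", "output": 50.0,  "cfp": 175.0},  # intensity 3.5
]
print(cfp_targets(plants))  # A -> 200.0, B stays at 160.0, C -> 100.0
```

The gap between an entity's current and target footprint is then the quantitative benchmark that orientates its improvement; the full method uses a DEA model so that multiple inputs and outputs can be handled at once.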

  19. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    International Nuclear Information System (INIS)

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    A quantitative local computed tomography method combined with data-constrained modelling has been developed. The method can distinctly improve the spatial resolution and the composition resolution in a sample larger than the field of view, for quantitative characterization of the three-dimensional distributions of material compositions and voids. Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales and have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the desired resolution and level of detail. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography, combined with a data-constrained modelling method, is proposed. The approach can dramatically improve the spatial resolution and reveal finer details within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach, and the optimal experimental parameters are pre-analyzed. The quantitative results demonstrate that the approach reveals significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of the minerals during coal processing. The method is generic and can be applied to the three-dimensional compositional characterization of other materials

  20. Synthetic approaches towards new polymer systems by the combination of living carbocationic and anionic polymerizations

    DEFF Research Database (Denmark)

Feldthusen, Jesper; Ivan, Bela; Muller, Axel H. E.

    1996-01-01

Recent efforts to obtain block copolymers by combining living carbocationic and anionic polymerizations are presented. When tolyl-ended polyisobutylene was used as a macroinitiator for the anionic polymerization of methacrylate derivatives, mixtures of homopolymers and block copolymers were formed due to incomplete lithiation of this chain end. In another approach, a new functionalization method was developed by end-quenching living polyisobutylene with 1,1-diphenylethylene. After transformation of the groups into 2,2-diphenylvinyl end groups and lithiation, polymers were synthesized from protected acrylate...

  1. Approaches and Methods of Periodization in Literary History

    Directory of Open Access Journals (Sweden)

    Naser Gholi Sarli

    2013-10-01

Full Text Available Abstract One of the most fundamental acts of historiography is to classify historical information along the diachronic axis. The method of this classification, or periodization, shows the theoretical approach of the historian and determines the structure and form of his history. Because of the multiple criteria of analysis and the variety of literary genres, periodization in literary history is more complicated than that of general history. We can distinguish two approaches to periodization in literary history, although these can be used together: the extrinsic or social-cultural approach (based on criteria extrinsic to literature) and the intrinsic or formalist approach (based on criteria intrinsic to literature). Periodization in literary history can then be formulated by different methods and may be based upon various criteria: chronological units such as century, decade and year; organic patterns of evolution; great poets and writers; literary emblems and evaluations of every period; events, concepts and periods of general or political history; analogy of literary history with the history of ideas or the history of arts; approaches and styles of language; dominant literary norms. In practice these methods are used together, and each is adequate for a particular kind of literary history. In the periodization of Persian contemporary literature, some methods and models current in the periodization of poetry have been applied identically to the periodization of prose. Periodization based upon century, decade and year is the simplest and most mechanical method, but certain centuries in some countries carry symbolic and stylistic meaning, and decades are often used for subdivisions of literary history, especially nowadays with the fast rhythm of literary change. Periodization according to organic patterns of evolution equates the changes of literary history with the life phases of an organism, and offers an account of the birth, maturity and death (and sometimes re-birth) of literary genres, but this method has

  2. Approaches and Methods of Periodization in Literary History

    Directory of Open Access Journals (Sweden)

    Dr. N. Gh. Sarli

Full Text Available One of the most fundamental acts of historiography is to classify historical information along the diachronic axis. The method of this classification, or periodization, shows the theoretical approach of the historian and determines the structure and form of his history. Because of the multiple criteria of analysis and the variety of literary genres, periodization in literary history is more complicated than that of general history. We can distinguish two approaches to periodization in literary history, although these can be used together: the extrinsic or social-cultural approach (based on criteria extrinsic to literature) and the intrinsic or formalist approach (based on criteria intrinsic to literature). Periodization in literary history can then be formulated by different methods and may be based upon various criteria: chronological units such as century, decade and year; organic patterns of evolution; great poets and writers; literary emblems and evaluations of every period; events, concepts and periods of general or political history; analogy of literary history with the history of ideas or the history of arts; approaches and styles of language; dominant literary norms. In practice these methods are used together, and each is adequate for a particular kind of literary history. In the periodization of Persian contemporary literature, some methods and models current in the periodization of poetry have been applied identically to the periodization of prose. Periodization based upon century, decade and year is the simplest and most mechanical method, but certain centuries in some countries carry symbolic and stylistic meaning, and decades are often used for subdivisions of literary history, especially nowadays with the fast rhythm of literary change. Periodization according to organic patterns of evolution equates the changes of literary history with the life phases of an organism, and offers an account of the birth, maturity and death (and sometimes re-birth) of literary genres, but this method has

  4. A Combined Raindrop Aggregate Destruction Test-Settling Tube (RADT-ST) Approach to Identify the Settling Velocity of Sediment

    Directory of Open Access Journals (Sweden)

    Liangang Xiao

    2015-10-01

Full Text Available The use of sediment settling velocity based on mineral grain size distribution in erosion models ignores the effects of aggregation on settling velocity. The alternative approach, wet-sieved aggregate size distribution, on the other hand, cannot represent all destructive processes that eroded soils may experience under impacting raindrops. Therefore, without considering raindrop impact, both methods may lead to biased predictions of the redistribution of sediment and associated substances across landscapes. Rainfall simulation is an effective way to simulate natural raindrop impact under controlled laboratory conditions. However, very few methods have been developed to integrate rainfall simulation with the settling velocity of eroded sediment. This study aims to develop a new proxy, based on rainfall simulation, to identify the actual settling velocity distribution of aggregated sediment. A combined Raindrop Aggregate Destruction Test-Settling Tube (RADT-ST) approach was developed to (1) simulate aggregate destruction under a series of simulated rainfalls; and (2) measure the actual settling velocity distribution of the destroyed aggregates. The Mean Weight Settling Velocity (MWSV) of aggregates was used to investigate the settling behaviors of different soils as rainfall kinetic energy increased. The results show that the settling velocity of silt-rich, raindrop-impacted aggregates is likely to be underestimated by a factor of at least six if based on mineral grain size distribution. The RADT-ST designed in this study effectively captures the effects of aggregation on settling behavior. The settling velocity distribution should be regarded as an evolving, rather than steady-state, parameter during erosion events. The combined RADT-ST approach is able to generate quasi-natural sediment under controlled simulated rainfall conditions and is sufficiently sensitive to measure the actual settling velocities of differently aggregated soils. This combined approach provides
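The Mean Weight Settling Velocity used above is, in effect, a mass-weighted average of the settling velocities of the fractions collected in the settling tube, analogous to the mean weight diameter of aggregate-stability analysis. A minimal sketch with illustrative numbers (not measured RADT-ST data):

```python
def mwsv(masses, velocities):
    """Mass-weighted mean settling velocity (same units as `velocities`).

    masses: dry sediment mass per settling-velocity class,
    velocities: representative (class-centre) settling velocity per class.
    """
    total = sum(masses)
    return sum(m * v for m, v in zip(masses, velocities)) / total

# Illustrative dry mass (g) per class and class-centre velocity (mm/s):
masses = [2.1, 5.4, 3.0, 0.9]
velocities = [12.0, 4.0, 1.0, 0.1]
mean_v = mwsv(masses, velocities)
```

Tracking how this single number shifts as simulated rainfall kinetic energy increases is what reveals the progressive breakdown of aggregates.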

  5. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    Energy Technology Data Exchange (ETDEWEB)

Chen, Li; He, Ya-Ling [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Kang, Qinjun [Computational Earth Science Group (EES-16), Los Alamos National Laboratory, Los Alamos, NM (United States); Tao, Wen-Quan, E-mail: wqtao@mail.xjtu.edu.cn [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.
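At the FVM-LBM interface, the reconstruction operator has to rebuild particle distribution functions from a macroscopic scalar obeying a convection-diffusion equation. As a hedged sketch, the equilibrium part of such a reconstruction on a D2Q9 lattice looks as follows; the paper's actual operator also carries a non-equilibrium correction from local gradients, which is omitted here:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (lattice units, c_s^2 = 1/3).
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def reconstruct(phi, u):
    """Equilibrium reconstruction f_i = w_i * phi * (1 + 3 e_i.u).

    Maps the FVM scalar `phi` (with advecting velocity `u`) onto the nine
    LBM distribution functions; the zeroth moment of the result recovers
    `phi` exactly, so the scalar is conserved across the interface.
    """
    return W * phi * (1.0 + 3.0 * (E @ np.asarray(u, float)))

f = reconstruct(1.0, [0.05, 0.0])
```

The first moment of this reconstruction likewise reproduces the advective flux phi*u, which is why the equilibrium part alone already makes a consistent, if lowest-order, coupling.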

  6. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities.

    Science.gov (United States)

    Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P

    2015-09-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

  7. Combination of statistical and physically based methods to assess shallow slide susceptibility at the basin scale

    Science.gov (United States)

    Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel

    2017-07-01

Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generates a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km²) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. Thus, these areas were classified as uncertain and deserve additional investigation at a more detailed scale.
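The physically based component referred to above, the infinite slope method, compares the resisting and driving stresses along a planar failure surface parallel to the slope. A minimal sketch of the factor-of-safety calculation, with illustrative parameter values (the study's actual inputs and calibration are not given in the abstract):

```python
import math

def infinite_slope_fs(c_eff, phi_eff_deg, gamma, gamma_w, z, m, beta_deg):
    """Factor of safety of the infinite slope model.

    FS = [c' + (gamma - m*gamma_w) * z * cos^2(beta) * tan(phi')]
         / [gamma * z * sin(beta) * cos(beta)]

    c_eff: effective cohesion (kPa); phi_eff_deg: friction angle (deg);
    gamma, gamma_w: soil and water unit weights (kN/m^3); z: failure
    depth (m); m: saturated fraction of the soil column; beta_deg: slope (deg).
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    resisting = c_eff + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Illustrative terrain unit: cells with FS < 1 are mapped as susceptible.
fs = infinite_slope_fs(c_eff=5.0, phi_eff_deg=30.0, gamma=18.0,
                       gamma_w=9.81, z=1.5, m=0.8, beta_deg=25.0)
```

Run over a grid of terrain units, this yields the deterministic susceptibility map that is then cross-tabulated against the information value map.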

  8. Methodology for combining dynamic responses

    International Nuclear Information System (INIS)

    Cudlin, R.; Hosford, S.; Mattu, R.; Wichman, K.

    1978-09-01

The NRC has historically required that the structural/mechanical responses due to various accident loads and loads caused by natural phenomena (such as earthquakes) be combined when analyzing structures, systems, and components important to safety. Several approaches to account for the potential interaction of loads resulting from accidents and natural phenomena have been used. One approach, the so-called absolute or linear summation (ABS) method, linearly adds the peak structural responses due to the individual dynamic loads. In general, the ABS method has also reflected the staff's conservative preference for the combination of dynamic load responses. A second approach, referred to as SRSS, yields a combined response equal to the square root of the sum of the squares of the peak responses due to the individual dynamic loads. The lack of a physical relationship between some of the loads has raised questions as to the proper methodology to be used in the design of nuclear power plants. An NRR Working Group was constituted to examine load combination methodologies and to develop a recommendation concerning criteria or conditions for their application. Evaluations of and recommendations on the use of the ABS and SRSS methods are provided in the report.
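The two combination rules discussed in the report reduce to simple formulas: ABS linearly adds the magnitudes of the peak responses, while SRSS takes the square root of the sum of their squares. A minimal sketch (the peak values are illustrative):

```python
import math

def combine_abs(responses):
    """Absolute (linear) summation: R = sum(|R_i|). Bounding and conservative,
    since it assumes all peaks occur at the same instant with the same sign."""
    return sum(abs(r) for r in responses)

def combine_srss(responses):
    """Square root of the sum of the squares: R = sqrt(sum(R_i^2)).
    Appropriate when the individual peak responses are unlikely to coincide."""
    return math.sqrt(sum(r * r for r in responses))

# Illustrative peak responses (consistent units) from two dynamic loads:
peaks = [3.0, 4.0]
combine_abs(peaks)    # 7.0
combine_srss(peaks)   # 5.0
```

The gap between the two numbers (7.0 vs. 5.0 here) is exactly the conservatism at stake in the Working Group's evaluation.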

  9. Endoscopic combined “transseptal/transnasal” approach for pituitary adenoma: reconstruction of skull base using pedicled nasoseptal flap in 91 consecutive cases

    Directory of Open Access Journals (Sweden)

    Yasunori Fujimoto

    2015-07-01

Full Text Available Objective: The purpose of this study was to describe the endoscopic combined “transseptal/transnasal” approach with a pedicled nasoseptal flap for pituitary adenoma and skull base reconstruction, especially with respect to cerebrospinal fluid (CSF) fistula. Method: Ninety-one consecutive patients with pituitary adenomas were retrospectively reviewed. All patients underwent the endoscopic combined “transseptal/transnasal” approach by a single team including the otorhinolaryngologists and neurosurgeons. Postoperative complications related to the flap were analyzed. Results: Intra- and postoperative CSF fistulae were observed in 36 (40%) and 4 (4.4%) patients, respectively. Among the 4 patients, lumbar drainage and bed rest healed the CSF fistula in 3 patients, and reoperation for revision was necessary in one patient. Other flap-related complications included nasal bleeding in 3 patients (3.3%). Conclusion: The endoscopic combined “transseptal/transnasal” approach is most suitable for a two-surgeon technique, and a pedicled nasoseptal flap is a reliable technique for preventing postoperative CSF fistula in pituitary surgery.

  10. Information and treatment of unknown correlations in the combination of measurements using the BLUE method

    CERN Document Server

    Valassi, A

    2014-01-01

    We discuss the effect of large positive correlations in the combinations of several measurements of a single physical quantity using the Best Linear Unbiased Estimate (BLUE) method. We suggest a new approach for comparing the relative weights of the different measurements in their contributions to the combined knowledge about the unknown parameter, using the well-established concept of Fisher information. We argue, in particular, that one contribution to information comes from the collective interplay of the measurements through their correlations and that this contribution cannot be attributed to any of the individual measurements alone. We show that negative coefficients in the BLUE weighted average invariably indicate the presence of a regime of high correlations, where the effect of further increasing some of these correlations is that of reducing the error on the combined estimate. In these regimes, we stress that the correlations provided as input to BLUE combinations need to be assessed with extreme ca...
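For a single physical quantity, the BLUE weights follow directly from the covariance matrix C of the measurements: w = C⁻¹u / (uᵀC⁻¹u), with u a vector of ones, and the combined variance is 1 / (uᵀC⁻¹u). The sketch below (illustrative numbers) reproduces the regime discussed above: once the correlation exceeds the ratio of the two uncertainties, one weight turns negative, and increasing the correlation further actually shrinks the combined error:

```python
import numpy as np

def blue_combine(x, cov):
    """Best Linear Unbiased Estimate of a single quantity.

    Returns (estimate, variance, weights), with weights
    w = C^-1 u / (u^T C^-1 u) and variance 1 / (u^T C^-1 u).
    """
    x = np.asarray(x, dtype=float)
    cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    u = np.ones(len(x))
    norm = u @ cinv @ u
    w = cinv @ u / norm
    return float(w @ x), 1.0 / norm, w

def cov2(s1, s2, rho):
    """Covariance matrix of two measurements with correlation rho."""
    return [[s1 * s1, rho * s1 * s2], [rho * s1 * s2, s2 * s2]]

# sigma1 = 1, sigma2 = 2: for rho > sigma1/sigma2 = 0.5 the second
# measurement acquires a negative weight in the combination.
est, var, w = blue_combine([10.0, 10.5], cov2(1.0, 2.0, 0.9))
```

Scanning `rho` upward past 0.5 shows both effects at once: `w[1]` goes negative and `var` decreases, which is exactly why correlations supplied to BLUE combinations deserve careful scrutiny.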

  11. Complete remission of recalcitrant genital warts with a combination approach of surgical debulking and oral isotretinoin in a patient with systemic lupus erythematosus.

    Science.gov (United States)

    Yew, Yik Weng; Pan, Jiun Yit

    2014-01-01

    Genital warts in immunocompromised patients can be extensive and recalcitrant to treatment. We report a case of recalcitrant genital warts in a female patient with systemic lupus erythematosus (SLE), who achieved complete remission with a combination approach of surgical debulking and oral isotretinoin at an initial dose of 20 mg/day with a gradual taper of dose over 8 months. She had previously been treated with a combination of topical imiquimod cream and regular fortnightly liquid nitrogen. Although there was partial response, there was no complete clearance. Her condition worsened after topical imiquimod cream was stopped because of her pregnancy. She underwent a combination approach of surgical debulking and oral isotretinoin after her delivery and achieved full clearance for more than 2 years duration. Oral isotretinoin, especially in the treatment of recalcitrant genital warts, is a valuable and feasible option when other more conventional treatment methods have failed or are not possible. It can be used alone or in combination with other local or physical treatment methods. © 2013 Wiley Periodicals, Inc.

  12. A rapid method combining Golgi and Nissl staining to study neuronal morphology and cytoarchitecture.

    Science.gov (United States)

    Pilati, Nadia; Barker, Matthew; Panteleimonitis, Sofoklis; Donga, Revers; Hamann, Martine

    2008-06-01

    The Golgi silver impregnation technique gives detailed information on neuronal morphology of the few neurons it labels, whereas the majority remain unstained. In contrast, the Nissl staining technique allows for consistent labeling of the whole neuronal population but gives very limited information on neuronal morphology. Most studies characterizing neuronal cell types in the context of their distribution within the tissue slice tend to use the Golgi silver impregnation technique for neuronal morphology followed by deimpregnation as a prerequisite for showing that neuron's histological location by subsequent Nissl staining. Here, we describe a rapid method combining Golgi silver impregnation with cresyl violet staining that provides a useful and simple approach to combining cellular morphology with cytoarchitecture without the need for deimpregnating the tissue. Our method allowed us to identify neurons of the facial nucleus and the supratrigeminal nucleus, as well as assessing cellular distribution within layers of the dorsal cochlear nucleus. With this method, we also have been able to directly compare morphological characteristics of neuronal somata at the dorsal cochlear nucleus when labeled with cresyl violet with those obtained with the Golgi method, and we found that cresyl violet-labeled cell bodies appear smaller at high cellular densities. Our observation suggests that cresyl violet staining is inadequate to quantify differences in soma sizes.

  13. "Combining equity and utilitarianism"-additional insights into a novel approach

    NARCIS (Netherlands)

    Lemmen-Gerdessen, van Joke; Kanellopoulos, Argyris; Claassen, Frits

    2018-01-01

    Recently, a novel approach (to be referred to as CEU) was introduced for the frequently arising problem of combining the conflicting criteria of equity and utilitarianism. This paper provides additional insights into CEU and assesses its added value for practice by comparing it with a commonly used

  14. Can a combination of the conformal thin-sandwich and puncture methods yield binary black hole solutions in quasiequilibrium?

    International Nuclear Information System (INIS)

    Hannam, Mark D.; Evans, Charles R.; Cook, Gregory B.; Baumgarte, Thomas W.

    2003-01-01

    We consider combining two important methods for constructing quasiequilibrium initial data for binary black holes: the conformal thin-sandwich formalism and the puncture method. The former seeks to enforce stationarity in the conformal three-metric and the latter attempts to avoid internal boundaries, like minimal surfaces or apparent horizons. We show that these two methods make partially conflicting requirements on the boundary conditions that determine the time slices. In particular, it does not seem possible to construct slices that are quasistationary and that avoid physical singularities while simultaneously are connected by an everywhere positive lapse function, a condition which must be obtained if internal boundaries are to be avoided. Some relaxation of these conflicting requirements may yield a soluble system, but some of the advantages that were sought in combining these approaches will be lost

  15. Optimal control of open quantum systems: a combined surrogate hamiltonian optimal control theory approach applied to photochemistry on surfaces.

    Science.gov (United States)

    Asplund, Erik; Klüner, Thorsten

    2012-03-28

In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method: a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.

  16. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews.

    Science.gov (United States)

    Pluye, Pierre; Hong, Quan Nha

    2014-01-01

    This article provides an overview of mixed methods research and mixed studies reviews. These two approaches are used to combine the strengths of quantitative and qualitative methods and to compensate for their respective limitations. This article is structured in three main parts. First, the epistemological background for mixed methods will be presented. Afterward, we present the main types of mixed methods research designs and techniques as well as guidance for planning, conducting, and appraising mixed methods research. In the last part, we describe the main types of mixed studies reviews and provide a tool kit and examples. Future research needs to offer guidance for assessing mixed methods research and reporting mixed studies reviews, among other challenges.

  17. An approach to combine radar and gauge based rainfall data under consideration of their qualities in low mountain ranges of Saxony

    Directory of Open Access Journals (Sweden)

    N. Jatho

    2010-03-01

Full Text Available An approach to combine gauge and radar data and additional quality information is presented. The development focused on improving the diagnostics for temporally (one hour) and spatially (1×1 km²) highly resolved precipitation data. The method is embedded in an online tool and was applied to the target area Saxony, Germany. The aim of the tool is to provide accurate spatial rainfall estimates. The results can be used for rainfall run-off modelling, e.g. in a flood management system.

Quality information allows a better assessment of the input data and the resulting precipitation field. It is stored in corresponding fields and represents the static and dynamic uncertainties of the radar and gauge data. Objective combination of the various precipitation and quality fields is realised using a cost function.

The findings of cross validation reveal that the proposed combination method merges the benefits and disadvantages of interpolated gauge and radar data and leads to mean estimates. The sampling point validation implies that the presented method slightly overestimated the areal rain as well as the high rain intensities in the case of convective and advective events, where the pure interpolation method performed better. In general, the use of the presented cost function avoids false rainfall amounts in areas of low input data quality and improves the reliability in areas of high data quality. The combined product includes the small-scale variability of the radar, which is seen as the important benefit of the presented combination approach. Local improvements of the final rain field are possible due to the consideration of gauges that were not used for radar calibration, e.g. in topographically distinct regions.
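The abstract does not state the cost function itself, so the following is only a generic illustration of the underlying idea: blending radar and gauge fields pixel by pixel according to their quality fields, so that low-quality input contributes little to the combined product. All array values are hypothetical:

```python
import numpy as np

def merge_fields(p_radar, q_radar, p_gauge, q_gauge, eps=1e-9):
    """Quality-weighted blend of two precipitation fields (illustrative).

    The paper derives the combined field by minimising a cost function over
    the quality fields; since the exact functional is not given in the
    abstract, this sketch simply weights each pixel by its quality, falling
    back to the better source where one quality collapses towards zero.
    """
    p_r, q_r = np.asarray(p_radar, float), np.asarray(q_radar, float)
    p_g, q_g = np.asarray(p_gauge, float), np.asarray(q_gauge, float)
    return (q_r * p_r + q_g * p_g) / (q_r + q_g + eps)

# Two 1 km cells: radar trusted in the first, the gauge field in the second.
merged = merge_fields([2.0, 5.0], [0.9, 0.1], [3.0, 4.0], [0.1, 0.9])
```

A cost-function formulation generalises this: the weighted average above is the minimiser of a per-pixel quadratic penalty with the qualities as weights.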

  18. Analysis of a combined mixed finite element and discontinuous Galerkin method for incompressible two-phase flow in porous media

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu

    2013-01-01

We analyze a combined method consisting of the mixed finite element method for the pressure equation and the discontinuous Galerkin method for the saturation equation for the coupled system of incompressible two-phase flow in porous media. The existence and uniqueness of numerical solutions are established under proper conditions by using a constructive approach. Optimal error estimates in L²(H¹) for saturation and in L∞(H(div)) for velocity are derived. Copyright © 2013 John Wiley & Sons, Ltd.

  19. Analysis of a combined mixed finite element and discontinuous Galerkin method for incompressible two-phase flow in porous media

    KAUST Repository

    Kou, Jisheng

    2013-06-20

We analyze a combined method consisting of the mixed finite element method for the pressure equation and the discontinuous Galerkin method for the saturation equation for the coupled system of incompressible two-phase flow in porous media. The existence and uniqueness of numerical solutions are established under proper conditions by using a constructive approach. Optimal error estimates in L²(H¹) for saturation and in L∞(H(div)) for velocity are derived. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Translating Basic Behavioral and Social Science Research to Clinical Application: The EVOLVE Mixed Methods Approach

    Science.gov (United States)

    Peterson, Janey C.; Czajkowski, Susan; Charlson, Mary E.; Link, Alissa R.; Wells, Martin T.; Isen, Alice M.; Mancuso, Carol A.; Allegrante, John P.; Boutin-Foster, Carla; Ogedegbe, Gbenga; Jobe, Jared B.

    2013-01-01

    Objective: To describe a mixed-methods approach to develop and test a basic behavioral science-informed intervention to motivate behavior change in 3 high-risk clinical populations. Our theoretically derived intervention comprised a combination of positive affect and self-affirmation (PA/SA), which we applied to 3 clinical chronic disease…

  1. Management of advanced intracranial intradural juvenile nasopharyngeal angiofibroma: combined single-stage rhinosurgical and neurosurgical approach.

    Science.gov (United States)

    Naraghi, Mohsen; Saberi, Hooshang; Mirmohseni, Atefeh Sadat; Nikdad, Mohammad Sadegh; Afarideh, Mohsen

    2015-07-01

Although intracranial extension of juvenile nasopharyngeal angiofibroma (JNA) occurs commonly, intradural penetration is extremely rare. Management of such tumors is a challenging issue in skull-base surgery, necessitating their removal via combined approaches. In this work, we share our experience in the management of extensive intradural JNA. In a university hospital-based setting of 2 tertiary care academic centers, the charts of 6 male patients (5 between 15 and 19 years old) were retrospectively reviewed. Patients presented chiefly with nasal obstruction, epistaxis, and proptosis. One case was an aggressive recurrent tumor in a 32-year-old patient. All cases underwent combined transnasal, transmaxillary, and craniotomy approaches assisted by the use of image-guided endoscopic surgery, with craniotomy preceding the rhinosurgical approach in 3 cases. Adding a transcranial approach to the transnasal and transmaxillary endoscopic approaches provided 2-sided exposure and good access to the huge intradural JNAs. One postoperative cerebrospinal fluid leak and 1 postoperative recurrence at the site of the infratemporal fossa were treated successfully. Otherwise, the course was uneventful in the remaining cases. Management of intracranial intradural JNA requires a multidisciplinary approach of combined open and endoscopic-assisted rhinosurgery and neurosurgery, because of the greater risk for complications during dissection. Carotid rupture and brain damage remain 2 catastrophic complications that should always be kept in mind. A combined rhinosurgical and neurosurgical approach also has the advantage of very modest cosmetic complications. © 2015 ARS-AAOA, LLC.

  2. A combined emitter threat assessment method based on ICW-RCM

    Science.gov (United States)

    Zhang, Ying; Wang, Hongwei; Guo, Xiaotao; Wang, Yubing

    2017-08-01

Considering that traditional emitter threat assessment methods struggle to reflect the degree of target threat intuitively and suffer from deficiencies in real-time performance and complexity, an emitter combined threat assessment algorithm based on an improved combination weighting method applied to the radar chart method (ICW-RCM) is proposed. Coarse sorting is integrated with fine sorting: emitters are first ranked roughly by threat level according to radar operation mode, lowering the task priority of low-threat emitters; emitters sharing the same radar operation mode are then ranked with ICW-RCM, and the final threat assessment is obtained by combining the coarse and fine sorting. Simulation analyses show the correctness and effectiveness of the algorithm. Compared with the classical CW-RCM emitter threat assessment method, the algorithm is visually intuitive and works quickly with lower complexity.
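The record does not spell out the radar chart computation, but the radar chart method is commonly implemented by placing weighted, normalized threat attributes on evenly spaced axes and scoring an emitter by the area of the resulting polygon. A minimal sketch under that assumption (the attribute layout, weights, and area-based index are illustrative, not taken from the paper):

```python
import math

def radar_chart_threat(attrs, weights):
    """Threat index from a weighted radar chart: attributes (each normalized
    to [0, 1]) are scaled by combination weights and placed on evenly spaced
    axes; the index is the polygon area, normalized so that an emitter with
    all attributes at 1.0 scores exactly 1.0."""
    n = len(attrs)
    theta = 2 * math.pi / n
    vals = [a * w for a, w in zip(attrs, weights)]
    # Polygon area = sum of the triangles spanned by adjacent axes.
    area = 0.5 * math.sin(theta) * sum(
        vals[i] * vals[(i + 1) % n] for i in range(n))
    max_area = 0.5 * math.sin(theta) * sum(
        weights[i] * weights[(i + 1) % n] for i in range(n))
    return area / max_area
```

Emitters would then be sorted by this index within each coarse (radar-mode) group.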

  3. Maintenance Approaches for Different Production Methods

    Directory of Open Access Journals (Sweden)

    Mungani, Dzivhuluwani Simon

    2013-11-01

Full Text Available Various production methods are used in industry to manufacture or produce a variety of products needed by industry and consumers. The nature of a product determines which production method is most suitable or cost-effective. A continuous process is typically used to produce large volumes of liquids or gases. Batch processing is often used for small volumes, such as pharmaceutical products. This paper discusses a research project to determine the relationship between maintenance approaches and production methods. A survey was done to determine to what extent three maintenance approaches (reliability-centred maintenance (RCM), total productive maintenance (TPM), and business-centred maintenance (BCM)) are used for three different production methods (continuous process, batch process, and a production line method).

  4. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  5. Identifying plant cell-surface receptors: combining 'classical' techniques with novel methods.

    Science.gov (United States)

    Uebler, Susanne; Dresselhaus, Thomas

    2014-04-01

Cell-cell communication during development and reproduction in plants depends largely on a few phytohormones and many diverse classes of polymorphic secreted peptides. The peptide ligands are bound at the cell surface of target cells by their membranous interaction partners representing, in most cases, either receptor-like kinases or ion channels. Although knowledge of both the extracellular ligand and its corresponding receptor(s) is necessary to describe the downstream signalling pathway(s), to date only a few ligand-receptor pairs have been identified. Several methods, such as affinity purification and yeast two-hybrid screens, have been used very successfully to elucidate interactions between soluble proteins, but most of these methods cannot be applied to membranous proteins. Experimental obstacles such as low concentration and poor solubility of membrane receptors, as well as unstable transient interactions, often hamper the use of these 'classical' approaches. However, over the last few years, much progress has been made to overcome these problems by combining classical techniques with new methodologies. In the present article, we review the most promising recent methods in identifying cell-surface receptor interactions, with an emphasis on success stories outside the field of plant research.

  6. Loss distribution approach for operational risk capital modelling under Basel II: Combining different data sources for risk estimation

    Directory of Open Access Journals (Sweden)

    Pavel V. Shevchenko

    2013-07-01

Full Text Available The management of operational risk in the banking industry has undergone significant changes over the last decade due to substantial changes in the operational risk environment. Globalization, deregulation, the use of complex financial products and changes in information technology have resulted in exposure to new risks very different from market and credit risks. In response, the Basel Committee on Banking Supervision has developed a regulatory framework, referred to as Basel II, that introduced an operational risk category and corresponding capital requirements. Over the past five years, major banks in most parts of the world have received accreditation under the Basel II Advanced Measurement Approach (AMA) by adopting the loss distribution approach (LDA), despite there being a number of unresolved methodological challenges in its implementation. Different approaches and methods are still under active debate. In this paper, we review methods proposed in the literature for combining different data sources (internal data, external data and scenario analysis), which is one of the regulatory requirements for the AMA.

  7. Angular approach combined to mechanical model for tool breakage detection by eddy current sensors

    Science.gov (United States)

    Ritou, M.; Garnier, S.; Furet, B.; Hascoet, J. Y.

    2014-02-01

The paper presents a new complete approach for Tool Condition Monitoring (TCM) in milling. The aim is the early detection of small damage so that catastrophic tool failures are prevented. A versatile in-process monitoring system is introduced for reliability concerns. The tool condition is determined by estimates of the radial eccentricity of the teeth. An adequate criterion is proposed, combining a mechanical model of milling with an angular approach. Then, a new solution is proposed for estimating cutting forces using eddy current sensors implemented close to the spindle nose. Signals are analysed in the angular domain, notably by the synchronous averaging technique. Phase shifts induced by changes of machining direction are compensated. Results are compared with cutting forces measured with a dynamometer table. The proposed method is implemented in an industrial case of a pocket machining operation. One of the cutting edges was slightly damaged during the machining, as shown by a direct measurement of the tool. A control chart is established with the estimates of cutter eccentricity obtained during the machining from the eddy current sensor signals. The efficiency and reliability of the method are demonstrated by a successful detection of the damage.

  8. Bending moment evaluation of a long specimen using a radial speckle pattern interferometer in combination with relaxation methods

    Science.gov (United States)

    Pacheco, Anderson; Fontana, Filipe; Viotti, Matias R.; Veiga, Celso L. N.; Lothhammer, Lívia R.; Albertazzi G., Armando, Jr.

    2015-08-01

The authors developed an achromatic speckle pattern interferometer able to measure in-plane displacements in polar coordinates. It has been used to measure combined stresses resulting from the superposition of mechanical loading and residual stresses. Relaxation methods have been applied to produce on the surface of the specimen a displacement field that can be used to determine the amount of combined stresses. Two relaxation methods are explored in this work: blind hole-drilling and indentation. The first results from a blind hole drilled with a high-speed drilling unit in the area of interest. The measured displacement data are fitted to an appropriate model to quantify the stress level using an indirect approach based on a set of finite element coefficients. The second approach uses indentation, where a hard spherical tip is firmly pressed against the surface to be measured with a predetermined indentation load. A plastic flow occurs around the indentation mark, producing a radial in-plane displacement field that is related to the amount of combined stresses. Also in this case, displacements are measured by the radial interferometer and used to determine the stresses by least-squares fitting to a displacement field determined by calibration. Both approaches are used to quantify the amount of bending stresses and moment in eight sections of a 12 m long, 200 mm diameter steel pipe subjected to a known transverse loading. Reference values of bending stresses are also determined by strain gauges. The comparison between the four sets of results is discussed in the paper.

  9. Predicting protein complexes using a supervised learning method combined with local structural information.

    Science.gov (United States)

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
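The robust score function is described only qualitatively above; one common way to realize "classification model combined with local structural information" is a convex blend of the classifier's output probability with the candidate subgraph's edge density. A hypothetical sketch (the mixing weight `alpha` and the `model_prob` callable are stand-ins for the paper's trained model, not its actual formula):

```python
def density(nodes, edges):
    """Edge density of the subgraph induced by `nodes` in an undirected PPI graph."""
    n = len(nodes)
    if n < 2:
        return 0.0
    inside = sum(1 for u, v in edges if u in nodes and v in nodes)
    return 2.0 * inside / (n * (n - 1))

def combined_score(nodes, edges, model_prob, alpha=0.5):
    """Robust candidate score: blend the classifier's probability that `nodes`
    form a complex with the local density of the induced subgraph. A greedy
    forward/backward search would add or drop nodes to maximize this score."""
    return alpha * model_prob(nodes) + (1 - alpha) * density(nodes, edges)
```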

  10. An isoeffect approach to the study of combined effects of mixed radiations--the nonparametric analysis of in vivo data

    International Nuclear Information System (INIS)

    Lam, G.K.

    1989-01-01

The combined effects of mixed radiations can be examined using a system of simple isoeffect relations which are derived from a recent analysis of in vitro results obtained for a variety of radiation mixtures. Similar isoeffect analysis methods have been used for over two decades in studies of the combined action of toxic agents such as drugs and antibiotics. Because of the isoeffect approach, the method is particularly useful for the analysis of ordinal data, for which conventional models based on parametric dose-effect relations may not be suitable. This is illustrated by applying the method to the analysis of a set of recently published in vivo data using the mouse foot skin reaction system for mixtures of neutrons and X rays. The good agreement between this method and the ordinal data also helps to provide further experimental support for the existence of a class of radiobiological data for which the simple isoeffect relations are valid.
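For a two-component mixture, the simplest isoeffect relation of this kind is additivity of dose fractions: d_A/D_A + d_B/D_B = 1, where D_A and D_B are the doses of each radiation alone that produce the chosen isoeffect. Whether the relations analyzed in this record take exactly this form is an assumption here; the sketch below merely solves that additive relation for the remaining dose:

```python
def remaining_dose_for_isoeffect(d_a, D_a, D_b):
    """Additive isoeffect relation for a two-component radiation mixture:
    d_a/D_a + d_b/D_b = 1, where D_a and D_b are the single-agent doses
    producing the chosen isoeffect. Returns d_b, the dose of radiation B
    needed to reach the isoeffect after a dose d_a of radiation A."""
    return D_b * (1.0 - d_a / D_a)
```

For example, if 10 Gy of X rays alone or 4 Gy of neutrons alone give the isoeffect, then after 5 Gy of X rays the additive relation predicts 2 Gy of neutrons is needed.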

  11. Combining qualitative and quantitative research approaches in understanding pain

    DEFF Research Database (Denmark)

    Moore, R.

    1996-01-01

There are many research issues about validity, and especially reliability, in regard to qualitative research results. Generalizability to any population base from which a relatively small number of informants are drawn is brought into question. Sensitivity to new discoveries is an advantage of qualitative research, while the advantage of quantified survey data is their reliability. This paper argues for combining qualitative and quantitative methods to improve the concurrent validity of results by triangulating interviews, observations or focus group data with short surveys for validation of main findings. Furthermore, with specific scientific assumptions, combining methods can aid in estimating the minimum sample size required for theoretical generalizations from even a qualitative sample. This is based on measures of how accurately subjects describe a given social phenomenon and their degree of agreement.

  12. Using an innovative combination of quality-by-design and green analytical chemistry approaches for the development of a stability indicating UHPLC method in pharmaceutical products.

    Science.gov (United States)

    Boussès, Christine; Ferey, Ludivine; Vedrines, Elodie; Gaudin, Karen

    2015-11-10

An innovative combination of green chemistry and a quality by design (QbD) approach is presented through the development of an UHPLC method for the analysis of the main degradation products of dextromethorphan hydrobromide. The QbD strategy was integrated into the field of green analytical chemistry to improve method understanding while assuring quality and minimizing environmental impacts and analyst exposure. This analytical method was thoroughly evaluated by applying risk assessment and multivariate analysis tools. After a scouting phase aimed at selecting a suitable stationary phase and an organic solvent in accordance with green chemistry principles, quality risk assessment tools were applied to determine the critical process parameters (CPPs). The effects of the CPPs on critical quality attributes (CQAs), i.e., resolutions, efficiencies, and solvent consumption, were further evaluated by means of a screening design. A response surface methodology was then carried out to model the CQAs as a function of the selected CPPs, and the optimal separation conditions were determined through a desirability analysis. The resulting contour plots enabled the design space (DS), i.e., the method operable design region, where all CQAs fulfilled the requirements, to be established. An experimental validation of the DS proved that quality within the DS was guaranteed; therefore no further robustness study was required before the validation. Finally, this UHPLC method was validated using the concept of total error and was used to analyze a pharmaceutical drug product. Copyright © 2015 Elsevier B.V. All rights reserved.
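The desirability analysis mentioned above typically follows the Derringer-Suich scheme: each CQA is mapped to a [0, 1] desirability and the overall desirability is their geometric mean, so any unacceptable response vetoes the operating point. A minimal sketch (the one-sided "larger is better" transform is shown; smaller-is-better responses such as solvent consumption use the mirrored transform; thresholds here are illustrative):

```python
import math

def desirability_larger_is_better(y, lo, hi, s=1.0):
    """Derringer-Suich one-sided desirability: 0 below `lo`, 1 above `hi`,
    a power-law ramp in between (s shapes the ramp)."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** s

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; any zero vetoes the point."""
    if min(ds) == 0.0:
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))
```

An optimizer over the CPPs would then maximize `overall_desirability` of the modeled CQAs, and the region where it meets requirements is the design space.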

  13. QUANTUM INSPIRED PARTICLE SWARM COMBINED WITH LIN-KERNIGHAN-HELSGAUN METHOD TO THE TRAVELING SALESMAN PROBLEM

    Directory of Open Access Journals (Sweden)

    Bruno Avila Leal de Meirelles Herrera

    2015-12-01

Full Text Available ABSTRACT The Traveling Salesman Problem (TSP) is one of the most well-known and studied problems of the Operations Research field, more specifically, of the Combinatorial Optimization field. As the TSP is an NP-hard (non-deterministic polynomial-time hard) problem, several heuristic methods have been proposed over the past decades in attempts to solve it as well as possible. The aim of this work is to introduce and to evaluate the performance of some approaches for achieving optimal solutions on some symmetric and asymmetric TSP instances taken from the Traveling Salesman Problem Library (TSPLIB). The analyzed approaches were divided into three methods: (i) the Lin-Kernighan-Helsgaun (LKH) algorithm; (ii) LKH with an initial tour based on a uniform distribution; and (iii) a hybrid proposal combining Particle Swarm Optimization (PSO) with quantum-inspired behavior and LKH for the local search procedure. The tested algorithms presented promising results in terms of computational cost and solution quality.
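LKH itself is far too involved to reproduce here, but the flavor of its local search can be shown with plain 2-opt, which reverses a tour segment whenever doing so shortens the tour. This is a deliberately simplified stand-in, not the LKH or PSO code evaluated in the paper:

```python
def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """2-opt local search: repeatedly replace edges (a,b) and (c,d) by
    (a,c) and (b,d), i.e. reverse a segment, whenever that shortens the
    tour. Stops at a local optimum."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip the pair of adjacent edges sharing city tour[0] when i == 0.
            for j in range(i + 2, n - (0 if i else 1)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if dist[a][b] + dist[c][d] > dist[a][c] + dist[b][d] + 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

In the paper's hybrid, a (quantum-inspired) PSO proposes starting tours and LKH plays the role this 2-opt routine plays here, only with far stronger moves.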

  14. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Yaozhong Luo

    2017-01-01

Full Text Available Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme to combine both region- and edge-based information into the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With the optimization of the particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance and gets the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%).
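Of the preprocessing chain above (bilateral filter, histogram equalization, pyramid mean shift), the contrast-enhancement step is the easiest to make concrete. A NumPy sketch of global histogram equalization for 8-bit grayscale images; the other pipeline stages and the PSO-tuned RGB segmentation are not reproduced here:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image:
    map each grey level through the normalized cumulative histogram,
    stretching the used intensity range to [0, 255]."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]          # first non-empty bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```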

  15. Dural opening/removal for combined petrosal approach: technical note.

    Science.gov (United States)

    Terasaka, Shunsuke; Asaoka, Katsuyuki; Kobayashi, Hiroyuki; Sugiyama, Taku; Yamaguchi, Shigeru

    2011-03-01

    Detailed descriptions of stepwise dural opening/removal for combined petrosal approach are presented. Following maximum bone work, the first dural incision was made along the undersurface of the temporal lobe parallel to the superior petrosal sinus. Posterior extension of the dural incision was made in a curved fashion, keeping away from the transverse-sigmoid junction and taking care to preserve the vein of Labbé. A second incision was made perpendicular to the first incision. After sectioning the superior petrosal sinus around the porus trigeminus, the incision was extended toward the posterior fossa dura in the middle fossa region. The tentorium was incised toward the incisura at a point just posterior to the entrance of the trochlear nerve. A third incision was made longitudinally between the superior petrosal sinus and the jugular bulb. A final incision was initiated perpendicular to the third incision in the presigmoid region and extended parallel to the superior petrosal sinus connecting the second incision. The dural complex consisting of the temporal lobe dura, the posterior fossa dura, and the freed tentorium could then be removed. In addition to extensive bone resection, our strategic cranial base dural opening/removal can yield true advantages for the combined petrosal approach.

  16. A combination Kalman filter approach for State of Charge estimation of lithium-ion battery considering model uncertainty

    International Nuclear Information System (INIS)

    Li, Yanwen; Wang, Chao; Gong, Jinfeng

    2016-01-01

Accurate battery State of Charge (SOC) estimation plays an important role in battery electric vehicles. This paper makes two contributions to the existing literature. (1) A recursive least squares method with a fuzzy adaptive forgetting factor has been presented to update the model parameters towards the real values more quickly. (2) The statistical information of the innovation sequence obeying the chi-square distribution has been introduced to identify model uncertainty, and a novel combination algorithm of a strong tracking unscented Kalman filter and an adaptive unscented Kalman filter has been developed to estimate the SOC. Experimental results indicate that the novel algorithm has a good performance in estimating the battery SOC against initial SOC errors and voltage sensor drift. A comparison with unscented Kalman filter-based algorithms and adaptive unscented Kalman filter-based algorithms shows that the proposed SOC estimation method has better accuracy, robustness and convergence behavior. - Highlights: • A recursive least squares method with fuzzy adaptive forgetting factor is presented. • The innovation obeying the chi-square distribution is used to identify uncertainty. • A combination Kalman filter approach for State of Charge estimation is presented. • The performance of the proposed method is verified by comparison results.
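The abstract's key mechanism, testing whether the innovation sequence is consistent with its chi-square distribution before trusting the filter model, can be sketched as a normalized-innovation-squared (NIS) gate. The 3.841 threshold (95% quantile for 1 degree of freedom) and the switching comment are assumed details for illustration, not taken from the paper:

```python
import numpy as np

def innovation_consistent(nu, S, chi2_threshold=3.841):
    """NIS gate: nu' S^-1 nu follows a chi-square distribution with dim(nu)
    degrees of freedom when the filter model matches reality. `nu` is the
    innovation (column vector), `S` its predicted covariance."""
    nis = (nu.T @ np.linalg.solve(S, nu)).item()
    return nis <= chi2_threshold

# A switching scheme in the spirit of the paper (assumed, not verbatim):
# keep the adaptive UKF update while the gate passes, and invoke the
# strong-tracking UKF when repeated NIS violations signal model error.
```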

  17. 5-aminolevulinic acid and neuronavigation in high-grade glioma surgery: results of a combined approach.

    Science.gov (United States)

    Panciani, Pier Paolo; Fontanella, Marco; Garbossa, Diego; Agnoletti, Alessandro; Ducati, Alessandro; Lanotte, Michele

    2012-02-01

In high-grade glioma surgery, several techniques are used to achieve maximum cytoreduction while preserving neurological functions. However, the effectiveness of each method used alone is reduced by its specific limitations. We assessed the reliability of a multimodal strategy based on 5-aminolevulinic acid (5-ALA) and neuronavigation. We prospectively studied 18 patients with suspected malignant gliomas in non-eloquent areas amenable to complete resection. Conventional illumination was used until the excision appeared complete. The cavity was then systematically inspected in violet-blue light to identify any residual tumour. Multiple biopsies of both fluorescent and non-fluorescent tissue were performed in all cases. Each specimen was labelled according to the sampling location (inside or outside the boundary set by the neuronavigator). The samples were analysed by a neuropathologist blinded to the intraoperative classification. We reviewed the results of both methods, either singly or in combination. Individual analysis showed higher 5-ALA reliability compared to neuronavigation. However, several false-negative fluorescent specimens were detected. With the combined use of fluorescence and neuroimaging, only 1 sample (negative for both 5-ALA and navigation) was tumoral tissue. In our experience, the combined approach showed the best sensitivity, and it is recommended in cases of lesions involving non-eloquent areas. Copyright © 2011 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.

  18. Monitoring Mining Subsidence Using A Combination of Phase-Stacking and Offset-Tracking Methods

    Directory of Open Access Journals (Sweden)

    Hongdong Fan

    2015-07-01

Full Text Available An approach to study the mechanism of mining-induced subsidence, using a combination of phase-stacking and sub-pixel offset-tracking methods, is reported. In this method, land subsidence with a small deformation gradient was calculated using time-series differential interferometric synthetic aperture radar (D-InSAR) data, whereas areas with greater subsidence were calculated by a sub-pixel offset-tracking method. With this approach, time-series data for mining subsidence were derived in the Yulin area using 11 TerraSAR-X (TSX) scenes from 13 December 2012 to 2 April 2013. The maximum mining subsidence and velocity values were 4.478 m and 40 mm/day, respectively, which were beyond the monitoring capabilities of D-InSAR and advanced InSAR. The results were compared with the GPS field survey data, and the root mean square errors (RMSE) of the results in the strike and dip directions were 0.16 m and 0.11 m, respectively. Four important results were obtained from the time-series subsidence in this mining area: (1) the mining-induced subsidence entered the residual deformation stage within about 44 days; (2) the advance angle of influence changed from 75.6° to 80.7°; (3) the prediction parameters of mining subsidence were obtained; and (4) the three-dimensional deformation was derived. This method could be used to predict the occurrence of mining accidents and to help in the restoration of the ecological environment after mining activities have ended.
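The offset-tracking half of the approach can be illustrated with a brute-force normalized cross-correlation search for the integer pixel shift between two SAR amplitude patches. Real pipelines add patch oversampling and sub-pixel peak refinement, which are omitted here; the search radius is illustrative:

```python
import numpy as np

def track_offset(ref, moved, max_shift=5):
    """Return the (dy, dx) integer shift, within ±max_shift pixels, that
    maximizes the normalized cross-correlation between `ref` and `moved`
    over their overlapping region."""
    H, W = ref.shape
    best, best_score = (0, 0), -2.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlap: ref[y, x] paired with moved[y + dy, x + dx].
            a = ref[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
            b = moved[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom == 0:
                continue
            score = float((a * b).sum() / denom)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```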

  19. Quantification of endogenous metabolites by the postcolumn infused-internal standard method combined with matrix normalization factor in liquid chromatography-electrospray ionization tandem mass spectrometry.

    Science.gov (United States)

    Liao, Hsiao-Wei; Chen, Guan-Yuan; Wu, Ming-Shiang; Liao, Wei-Chih; Tsai, I-Lin; Kuo, Ching-Hua

    2015-01-02

Quantification of endogenous metabolites has enabled the discovery of biomarkers for diagnosis and provided an understanding of disease etiology. The standard addition and stable isotope labeled-internal standard (SIL-IS) methods are currently the most widely used approaches to quantifying endogenous metabolites, but both have some limitations for clinical measurement. In this study, we developed a new approach for endogenous metabolite quantification by the postcolumn infused-internal standard (PCI-IS) method combined with the matrix normalization factor (MNF) method. The MNF was used to correct the difference in matrix effects (MEs) between standard solution and biofluids, and the PCI-IS additionally tailored the correction of the MEs for individual samples. Androstenedione and testosterone were selected as test articles to verify this new approach to quantifying metabolites in plasma. The repeatability (n=4 runs) and intermediate precision (n=3 days) in terms of the peak area of androstenedione and testosterone at all tested concentrations were all less than 11% relative standard deviation (RSD). The accuracy test revealed that the recoveries were between 95.72% and 113.46%. The concentrations of androstenedione and testosterone in fifty plasma samples obtained from healthy volunteers were quantified by the PCI-IS combined with the MNF method, and the quantification results were compared with the results of the SIL-IS method. The Pearson correlation test showed that the correlation coefficient was 0.98 for both androstenedione and testosterone. We demonstrated that the PCI-IS combined with the MNF method is an effective and accurate method for quantifying endogenous metabolites. Copyright © 2014 Elsevier B.V. All rights reserved.
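The correction arithmetic implied above can be sketched in two steps: the postcolumn-infused IS response at the analyte's retention time corrects each sample's matrix effect, and the ratio of the IS response in neat standard solution to that in the biofluid bridges back to a standard-solution calibration curve. The variable names and the exact form of the correction are assumptions for illustration, not the paper's formulas:

```python
def matrix_corrected_area(analyte_area, is_area_in_sample, is_area_in_standard):
    """Rescale the analyte peak area by the PCI-IS response ratio: if matrix
    components suppress (or enhance) ionization by some factor, the co-eluting
    infused IS is affected by approximately the same factor, so the ratio
    cancels the per-sample matrix effect."""
    return analyte_area * is_area_in_standard / is_area_in_sample

def concentration(corrected_area, slope, intercept=0.0):
    """Read the concentration off a calibration line built in neat standard solution."""
    return (corrected_area - intercept) / slope
```

For example, with 20% ionization suppression an analyte area of 100 is observed as 80, and the infused IS drops from 1000 to 800; the ratio restores the area to 100 before calibration.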

  20. Variants of the Borda count method for combining ranked classifier hypotheses

    NARCIS (Netherlands)

    van Erp, Merijn; Schomaker, Lambert; Schomaker, Lambert; Vuurpijl, Louis

    2000-01-01

    The Borda count is a simple yet effective method of combining rankings. In pattern recognition, classifiers are often able to return a ranked set of results. Several experiments have been conducted to test the ability of the Borda count and two variant methods to combine these ranked classifier
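The standard Borda count the record refers to fits in a few lines: each classifier's ranking awards a candidate points equal to the number of candidates ranked below it, and the totals decide the combined ranking. A minimal sketch (the variant methods studied in the paper are not reproduced):

```python
from collections import defaultdict

def borda_combine(rankings):
    """Combine ranked classifier outputs with the Borda count: in a ranking
    of n candidates, position p (0 = best) earns n - p - 1 points; candidates
    are returned sorted by total points, highest first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, label in enumerate(ranking):
            scores[label] += n - pos - 1
    return sorted(scores, key=lambda label: -scores[label])
```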

  1. Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods

    Science.gov (United States)

    2010-01-01

    Background Schools are key settings for health promotion (HP) but the development of suitable approaches for evaluating HP in schools is still a major topic of discussion. This article presents a research protocol of a program developed to evaluate HP. After reviewing HP evaluation issues, the various possible approaches are analyzed and the importance of a realistic evaluation framework and a mixed methods (MM) design are demonstrated. Methods/Design The design is based on a systemic approach to evaluation, taking into account the mechanisms, context and outcomes, as defined in realistic evaluation, adjusted to our own French context using an MM approach. The characteristics of the design are illustrated through the evaluation of a nationwide HP program in French primary schools designed to enhance children's social, emotional and physical health by improving teachers' HP practices and promoting a healthy school environment. An embedded MM design is used in which a qualitative data set plays a supportive, secondary role in a study based primarily on a different quantitative data set. The way the qualitative and quantitative approaches are combined through the entire evaluation framework is detailed. Discussion This study is a contribution towards the development of suitable approaches for evaluating HP programs in schools. The systemic approach of the evaluation carried out in this research is appropriate since it takes account of the limitations of traditional evaluation approaches and considers suggestions made by the HP research community. PMID:20109202

  2. A Quantum Hybrid PSO Combined with Fuzzy k-NN Approach to Feature Selection and Cell Classification in Cervical Cancer Detection

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2017-12-01

Full Text Available A quantum hybrid (QH) intelligent approach that blends the adaptive search capability of the quantum-behaved particle swarm optimisation (QPSO) method with the intuitionistic rationality of the traditional fuzzy k-nearest neighbours (Fuzzy k-NN) algorithm (known simply as the Q-Fuzzy approach) is proposed for efficient feature selection and classification of cells in cervical smeared (CS) images. From an initial multitude of 17 features describing the geometry, colour, and texture of the CS images, the QPSO stage of our proposed technique is used to select the best subset of features (i.e., global best particles) that represent a pruned-down collection of seven features. Using a dataset of almost 1000 images, performance evaluation of our proposed Q-Fuzzy approach assesses the impact of our feature selection on classification accuracy by way of three experimental scenarios that are compared alongside two other approaches: the All-features approach (i.e., classification without prior feature selection) and another hybrid technique combining the standard PSO algorithm with the Fuzzy k-NN technique (the P-Fuzzy approach). In the first and second scenarios, we further divided the assessment criteria into classification accuracy based on the choice of best features and accuracy across the different categories of cervical cells. In the third scenario, we introduced new QH hybrid techniques, i.e., QPSO combined with other supervised learning methods, and compared their classification accuracy alongside our proposed Q-Fuzzy approach. Furthermore, we employed statistical approaches to establish qualitative agreement with regard to the feature selection in experimental scenarios 1 and 3. The synergy between QPSO and Fuzzy k-NN in the proposed Q-Fuzzy approach improves classification accuracy, as manifest in the reduction in the number of cell features, which is crucial for effective cervical cancer detection and diagnosis.
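The classification half of the Q-Fuzzy approach, fuzzy k-NN, assigns soft class memberships from distance-weighted votes of the k nearest neighbours. A NumPy sketch with crisp training labels and Keller's fuzzifier m = 2 (the feature vectors here merely stand in for the seven QPSO-selected cell features):

```python
import numpy as np

def fuzzy_knn_memberships(X_train, y_train, x, k=3, m=2.0, n_classes=2):
    """Fuzzy k-NN: class membership of `x` is a vote of its k nearest
    training samples, weighted by 1 / d^(2/(m-1)); memberships sum to 1,
    and argmax gives the crisp prediction."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    u = np.zeros(n_classes)
    for i, wi in zip(nn, w):
        u[y_train[i]] += wi
    return u / u.sum()
```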

  3. Direct torque control method applied to the WECS based on the PMSG and controlled with backstepping approach

    Science.gov (United States)

    Errami, Youssef; Obbadi, Abdellatif; Sahnoun, Smail; Ouassaid, Mohammed; Maaroufi, Mohamed

    2018-05-01

This paper proposes a Direct Torque Control (DTC) method for a Wind Power System (WPS) based on a Permanent Magnet Synchronous Generator (PMSG) and a Backstepping approach. In this work, the generator-side and grid-side converters with filter are used as the interface between the wind turbine and the grid. The Backstepping approach demonstrates great performance in the control of complicated nonlinear systems such as the WPS. Hence, the control method combines DTC, to achieve Maximum Power Point Tracking (MPPT), with the Backstepping approach, to sustain the DC-bus voltage and to regulate the grid-side power factor. In addition, the control strategy is developed in the sense of the Lyapunov stability theorem for the WPS. Simulation results using MATLAB/Simulink validate the effectiveness of the proposed controllers.

  4. Combined multi-analytical approach for study of pore system in bricks: How much porosity is there?

    Energy Technology Data Exchange (ETDEWEB)

    Coletti, Chiara, E-mail: chiara.coletti@studenti.unipd.it [Department of Geosciences, University of Padova, Via G. Gradenigo 6, 35131 Padova (Italy); Department of Mineralogy and Petrology, Faculty of Science, University of Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain); Cultrone, Giuseppe [Department of Mineralogy and Petrology, Faculty of Science, University of Granada, Avda. Fuentenueva s/n, 18002 Granada (Spain); Maritan, Lara; Mazzoli, Claudio [Department of Geosciences, University of Padova, Via G. Gradenigo 6, 35131 Padova (Italy)

    2016-11-15

    During the firing of bricks, mineralogical and textural transformations produce an artificial aggregate characterised by significant porosity. Porosity, particularly pore-size distribution and the interconnection model, is an important parameter for evaluating and predicting the durability of bricks. The pore system is in fact the main element correlating building materials with their environment (especially under aggressive weathering, e.g., salt crystallisation and freeze-thaw cycles) and determines their durability. Four industrial bricks with differing compositions and firing temperatures were analysed with “direct” and “indirect” techniques: traditional methods (mercury intrusion porosimetry, hydric tests, nitrogen adsorption) and new analytical approaches based on digital image reconstruction of 2D and 3D models (back-scattered electron imaging and computerised X-ray micro-tomography, respectively). The comparison of results from the different analytical methods in the “overlapping ranges” of porosity, together with the careful reconstruction of a cumulative curve, made it possible to overcome their specific limitations and achieve better knowledge of the pore system of bricks. - Highlights: •Pore-size distribution and structure of the pore system in four commercial bricks •A multi-analytical approach combining “direct” and “indirect” techniques •Traditional methods vs. new approaches based on 2D/3D digital image reconstruction •The use of “overlapping ranges” to overcome the limitations of various techniques.

  5. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  6. A combined application of boundary-element and Runge-Kutta methods in three-dimensional elasticity and poroelasticity

    Directory of Open Access Journals (Sweden)

    Igumnov Leonid

    2015-01-01

    The report presents the development of the time-boundary-element methodology and a description of the related software, based on a stepped method of numerical inversion of the integral Laplace transform in combination with a family of Runge-Kutta methods, for analyzing 3-D mixed initial boundary-value problems of the dynamics of inhomogeneous elastic and poroelastic bodies. The results of the numerical investigation are presented. The investigation methodology is based on direct-approach boundary integral equations of the 3-D isotropic linear theories of elasticity and poroelasticity in Laplace transforms. Poroelastic media are described using Biot models with four and five base functions. With the help of the boundary-element method, solutions in time are obtained using the stepped method of numerically inverting the Laplace transform on the nodes of Runge-Kutta methods. The boundary-element method is used in combination with the collocation method and local element-by-element approximation based on the matched interpolation model. The results of analyzing wave problems of the effect of a non-stationary force on elastic and poroelastic finite bodies, a poroelastic half-space (also with a fictitious boundary), a layered half-space weakened by a cavity, and a half-space with a trench are presented. Excitation of a slow wave in a poroelastic medium is studied using the stepped BEM scheme on the nodes of Runge-Kutta methods.

  7. Combination of acoustical radiosity and the image source method

    DEFF Research Database (Denmark)

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho

    2013-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...

  8. A new method for the determination of peak distribution across a two-dimensional separation space for the identification of optimal column combinations.

    Science.gov (United States)

    Leonhardt, Juri; Teutenberg, Thorsten; Buschmann, Greta; Gassner, Oliver; Schmidt, Torsten C

    2016-11-01

    For the identification of optimal column combinations, a comparative orthogonality study of single columns and columns coupled in series for the first dimension of a microscale two-dimensional liquid chromatographic approach was performed. In total, eight columns or column combinations were chosen. For the assessment of the optimal column combination, the orthogonality value as well as the peak distributions across the first and second dimensions were used. In total, three different methods of orthogonality calculation, namely the Convex Hull, Bin Counting, and Asterisk methods, were compared. Unfortunately, the first two methods do not provide any information on peak distribution. The third method provides this important information, but is not optimal when only a limited number of components is used for method development. Therefore, a new concept for assessing peak distribution across the separation space of two-dimensional chromatographic systems and detecting clustering was developed. It could be shown that the Bin Counting method, in combination with additionally calculated histograms for the respective dimensions, is well suited for the evaluation of orthogonality and peak clustering. The newly developed method could be used generally in the assessment of 2D separations.
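    The Bin Counting measure mentioned above can be illustrated with a short sketch. The normalisation below follows the commonly used Gilar-style formulation; the retention data are synthetic, and the rule of thumb of roughly √n bins per dimension is an assumption, not the paper's exact recipe.

```python
import math

def bin_counting_orthogonality(rt1, rt2, nbins=None):
    """Bin-counting orthogonality for a 2D separation (Gilar-style).

    Retention times are normalised to [0, 1] in each dimension, the unit
    square is divided into nbins x nbins cells, and the fraction of
    occupied cells measures how evenly peaks spread over the plane.
    """
    n = len(rt1)
    if nbins is None:
        nbins = max(2, round(math.sqrt(n)))   # common rule of thumb
    def norm(v):
        lo, hi = min(v), max(v)
        return [(x - lo) / (hi - lo) for x in v]
    x, y = norm(rt1), norm(rt2)
    occupied = {
        (min(int(a * nbins), nbins - 1), min(int(b * nbins), nbins - 1))
        for a, b in zip(x, y)
    }
    p_max = nbins * nbins
    # Gilar's normalisation: 0 for fully correlated retention data.
    return (len(occupied) - math.sqrt(p_max)) / (0.63 * p_max)

# Fully correlated retention times occupy only the diagonal bins:
o_corr = bin_counting_orthogonality(list(range(16)), list(range(16)))
# Peaks spread over the whole plane cover far more bins:
xs = [i for i in range(4) for _ in range(4)]
ys = [j for _ in range(4) for j in range(4)]
o_spread = bin_counting_orthogonality(xs, ys)
```

    The per-dimension histograms proposed in the paper would be computed from the same normalised retention times, one marginal histogram per axis.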

  9. Approach to the problem of combined radiation and environmental effect standardization

    International Nuclear Information System (INIS)

    Burykina, L.N.; Ajzina, N.L.; Vasil'eva, L.A.; Veselovskaya, K.A.; Likhachev, Yu.P.; Ponomareva, V.L.; Satarina, S.M.; Shmeleva, E.V.

    1978-01-01

    Rats were used to study combined forms of damage caused by radioactive substances with various types of distribution (¹³¹I and ¹⁴⁷Pm) and by external radiation sources (γ, X). Damage caused by combined radiation and dust factors was also studied. Synergism of the combined effect of the tolerance dose of ¹⁴⁷Pm introduced and preceding external general γ-irradiation was determined. The combined action of ¹³¹I and external γ- and X-ray radiation exhibited an additive effect on rat thyroid glands. The combined action of dust and radiation factors showed that the biological effect depended on the dose absorbed in a critical organ (the lungs). The results of the investigations point to the important role of critical organs (systems) and the degree of their radiosensitivity in the response of the body to combined internal and external irradiation. The facts presented show that the approach of standardizing radiation factors from the position of partial summation should be changed. This may be accomplished by using a combination factor which is determined experimentally and reflects the relative biological efficiency of the combined effects as compared to separate ones

  10. Assessing Crowdsourced POI Quality: Combining Methods Based on Reference Data, History, and Spatial Relations

    Directory of Open Access Journals (Sweden)

    Guillaume Touya

    2017-03-01

    With the development of location-aware devices and the success and high use of Web 2.0 techniques, citizens are able to act as sensors by contributing geographic information. In this context, data quality is an important aspect that should be taken into account when using this source of data for different purposes. The goal of the paper is to analyze the quality of crowdsourced data and to study its evolution over time. We propose two types of approaches: (1) use the intrinsic characteristics of the crowdsourced datasets; or (2) evaluate crowdsourced Points of Interest (POIs) using external datasets (i.e., authoritative reference or other crowdsourced datasets), with two different methods for each approach. The potential of the combination of these approaches is then demonstrated, to overcome the limitations associated with each individual method. In this paper, we focus on POIs and places coming from the very successful crowdsourcing project OpenStreetMap. The results show that the proposed approaches are complementary in assessing data quality. The positive results obtained for data matching show that the analysis of data quality through automatic data matching is possible, but considerable effort and attention are needed for schema matching given the heterogeneity of OSM and the representation of authoritative datasets. For the features studied, it can be noted that change over time is sometimes due to disagreements between contributors, but in most cases the change improves the quality of the data.

  11. Innovative Method in Improving Communication Issues by Applying Interdisciplinary Approach. Psycholinguistic Perspective to Mitigate Communication Troubles During Cislunar Travel.

    Science.gov (United States)

    Anikushina, V.; Taratukhin, V.; Stutterheim, C. v.; Gushin, V.

    2018-02-01

    A new psycholinguistic view on the crew communication, combined with biochemical and psychological data, contributes to noninvasive methods for stress appraisal and proposes alternative approaches to improve in-group communication and cohesion.

  12. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    The paper describes a pattern-oriented approach to evaluating modeling methods and comparing various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First, the core ("method-neutral") meaning of each principle is described. Then the methods are examined with regard to the principle. Afterwards, the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template, in line with descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  13. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    Science.gov (United States)

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry-tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; the highest F values occurred in fall months and the lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry-tag return approach can be effective for estimating F and M as long as several key assumptions of the model are met.
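    The instantaneous rates F and M estimated above translate into annual fates of a cohort via the standard Baranov relations; a minimal sketch (only M = 0.04 comes from the abstract, the F value here is purely illustrative):

```python
import math

def annual_fates(F, M):
    """Split the annual fate of a cohort given instantaneous rates.

    F and M are instantaneous fishing and natural mortality rates (per
    year); returns (survival, fraction harvested, fraction dying
    naturally), following the Baranov catch equation.
    """
    Z = F + M                      # total instantaneous mortality
    survival = math.exp(-Z)
    dead = 1.0 - survival
    return survival, F / Z * dead, M / Z * dead

# Illustrative F = 0.6 combined with the abstract's M = 0.04:
s, harvested, natural = annual_fates(0.6, 0.04)
```

    With M this low, nearly all deaths in the sketch are attributed to fishing, which mirrors the paper's finding that F dominated total mortality.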

  14. Combined SAFE/SNAP approach to safeguards evaluation

    International Nuclear Information System (INIS)

    Engi, D.; Chapman, L.D.; Grant, F.H.; Polito, J.

    1980-01-01

    Generally, the scope of a safeguards evaluation model can efficiently address one of two issues: (1) global safeguards effectiveness, or (2) vulnerability analysis for individual scenarios. The Safeguards Automated Facility Evaluation (SAFE) focuses on (1), while the Safeguards Network Analysis Procedure (SNAP) is directed at (2). SAFE addresses (1) in that it considers the entire facility, i.e., the composite system of hardware and human components, in one global analysis. SNAP addresses (2) by providing a safeguards modeling symbology sufficiently flexible to represent quite complex scenarios from the standpoint of hardware interfaces while also accounting for a rich variety of human decision making. A combined SAFE/SNAP approach to the problem of safeguards evaluation is described and illustrated through an example

  15. Inequalities and Duality in Gene Coexpression Networks of HIV-1 Infection Revealed by the Combination of the Double-Connectivity Approach and the Gini's Method

    Directory of Open Access Journals (Sweden)

    Chuang Ma

    2011-01-01

    Symbiosis (Sym) and pathogenesis (Pat) form a duality problem of microbial infection, including HIV/AIDS. Statistical analysis of inequalities and duality in gene coexpression networks (GCNs) of HIV-1 infection may provide novel insights into AIDS. In this study, we focused on the analysis of GCNs of uninfected subjects and of HIV-1-infected patients at three different stages of viral infection, based on data deposited in the GEO database of NCBI. The inequalities and duality in these GCNs were analyzed by combining the double-connectivity (DC) approach and the Gini's method. DC analysis reveals that there are significant differences between positive and negative connectivity in HIV-1 stage-specific GCNs. The inequality measures of negative connectivity and edge weight change more significantly than those of positive connectivity and edge weight in GCNs from the HIV-1-uninfected to the AIDS stages. With the permutation test method, we identified a set of genes with significant changes in the inequality and duality measures of edge weight. Functional analysis shows that these genes are highly enriched in the immune system, which plays an essential role in the Sym-Pat duality (SPD) of microbial infections. Understanding the SPD problems of HIV-1 infection may provide novel intervention strategies for AIDS.
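    The Gini's method applied to connectivity inequality reduces to computing a Gini coefficient over node connectivities or edge weights. A minimal stand-alone sketch (the connectivity values are invented):

```python
def gini(values):
    """Gini coefficient of a non-negative sequence (0 = perfect equality).

    Uses the standard closed form on the sorted values:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with i 1-indexed.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Node connectivities of a hypothetical coexpression network:
equal = gini([5, 5, 5, 5])      # -> 0.0 (all nodes equally connected)
skewed = gini([0, 0, 0, 20])    # -> 0.75 (one hub holds all edges)
```

    In the paper's setting, separate Gini coefficients for positive and negative connectivity (or edge weight) would be compared across infection stages.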

  16. Optimal control of open quantum systems: A combined surrogate Hamiltonian optimal control theory approach applied to photochemistry on surfaces

    International Nuclear Information System (INIS)

    Asplund, Erik; Kluener, Thorsten

    2012-01-01

    In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)]. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998); Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)]. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method: a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = m_e = e = a_0 = 1, have been used unless otherwise stated.

  17. SENSITIVITY ANALYSIS as a methodical approach to the development of design strategies for environmentally sustainable buildings

    DEFF Research Database (Denmark)

    Hansen, Hanne Tine Ring

    The field has seen an increase in scientific and political awareness, which has led to an escalation in the number of research publications, as well as legislative demands for the energy consumption of buildings. The publications in the field refer to many different approaches to environmentally sustainable architecture, such as: ecological, green, bio-climatic, sustainable, passive, low-energy and environmental architecture. This PhD project sets out to gain a better understanding of environmentally sustainable architecture and the methodical approaches applied in the development of this type of architecture. The research methodology applied in the project combines a literature study of descriptions of methodical approaches and built examples with a sensitivity analysis and a qualitative interview with two designers from a best practice example of a practice that has achieved environmentally sustainable architecture.

  18. Propagating Class and Method Combination

    DEFF Research Database (Denmark)

    Ernst, Erik

    1999-01-01

    number of implicit combinations. For example, it is possible to specify separate aspects of a family of classes, and then combine several aspects into a full-fledged class family. The combination expressions would explicitly combine whole-family aspects, and by propagation implicitly combine the aspects...

  19. Investigations of phosphate coatings of galvanized steel sheets by a surface-analytical multi-method approach

    International Nuclear Information System (INIS)

    Bubert, H.; Garten, R.; Klockenkaemper, R.; Puderbach, H.

    1983-01-01

    Corrosion-protective coatings on galvanized steel sheets have been studied by a combination of SEM, EDX, AES, ISS and SIMS. Analytical statements concerning such rough, polycrystalline and contaminated surfaces of technical samples are quite difficult to obtain. The use of a surface-analytical multi-method approach overcomes the intrinsic limitations of the individual methods applied, thus resulting in a consistent picture of these technical surfaces. Such results can be used to examine technical faults and to optimize the technical process. (Author)

  20. Integrated Transport Planning Framework Involving Combined Utility Regret Approach

    DEFF Research Database (Denmark)

    Wang, Yang; Monzon, Andres; Di Ciommo, Floridea

    2014-01-01

    Sustainable transport planning requires an integrated approach involving strategic planning, impact analysis, and multicriteria evaluation. This study aimed at relaxing the utility-based decision-making assumption by newly embedding anticipated-regret and combined utility-regret decision mechanisms in a framework for integrated transport planning. The framework consisted of a two-round Delphi survey, an integrated land use and transport model for Madrid, and multicriteria analysis. Results show that (a) the regret-based ranking has a similar mean but larger variance than the utility-based ranking does, (b) the least-regret scenario forms a compromise between the desired and the expected scenarios, (c) the least-regret scenario can lead to higher user benefits in the short term and lower user benefits in the long term, and (d) the utility-based, the regret-based, and the combined utility- and regret...

  1. Simulation of Semi-Solid Material Mechanical Behavior Using a Combined Discrete/Finite Element Method

    Science.gov (United States)

    Sistaninia, M.; Phillion, A. B.; Drezet, J.-M.; Rappaz, M.

    2011-01-01

    As a necessary step toward the quantitative prediction of hot tearing defects, a three-dimensional stress-strain simulation based on a combined finite element (FE)/discrete element method (DEM) has been developed that is capable of predicting the mechanical behavior of semisolid metallic alloys during solidification. The solidification model used for generating the initial solid-liquid structure is based on a Voronoi tessellation of randomly distributed nucleation centers and a solute diffusion model for each element of this tessellation. At a given fraction of solid, the deformation is then simulated with the solid grains being modeled using an elastoviscoplastic constitutive law, whereas the remaining liquid layers at grain boundaries are approximated by flexible connectors, each consisting of a spring element and a damper element acting in parallel. The model predictions have been validated against Al-Cu alloy experimental data from the literature. The results show that a combined FE/DEM approach is able to express the overall mechanical behavior of semisolid alloys at the macroscale based on the morphology of the grain structure. For the first time, the localization of strain in the intergranular regions is taken into account. Thus, this approach constitutes an indispensable step towards the development of a comprehensive model of hot tearing.
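    The intergranular liquid films are approximated by a spring and a damper acting in parallel, i.e. a Kelvin-Voigt connector, whose force law is simply additive. A minimal sketch with invented stiffness and damping values (not the paper's parameters):

```python
def kelvin_voigt_force(k, c, stretch, stretch_rate):
    """Force carried by a parallel spring-damper (Kelvin-Voigt) connector.

    k is the spring stiffness and c the damping coefficient; because the
    two elements act in parallel, their force contributions add.
    """
    return k * stretch + c * stretch_rate

# Illustrative values: a film stretched 0.1 units, opening at 0.2 units/s.
f = kelvin_voigt_force(k=2.0, c=0.5, stretch=0.1, stretch_rate=0.2)  # ≈ 0.3
```

    In the full FE/DEM model, one such connector would couple each pair of neighbouring grains across the liquid layer, while the grains themselves follow the elastoviscoplastic law.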

  2. Toward a Mixed-Methods Research Approach to Content Analysis in The Digital Age: The Combined Content-Analysis Model and its Applications to Health Care Twitter Feeds

    Science.gov (United States)

    Hamad, Eradah O; Savundranayagam, Marie Y; Holmes, Jeffrey D; Kinsella, Elizabeth Anne

    2016-01-01

    Background Twitter’s 140-character microblog posts are increasingly used to access information and facilitate discussions among health care professionals and between patients with chronic conditions and their caregivers. Recently, efforts have emerged to investigate the content of health care-related posts on Twitter. This marks a new area for researchers to investigate and apply content analysis (CA). In current infodemiology, infoveillance and digital disease detection research initiatives, quantitative and qualitative Twitter data are often combined, and there are no clear guidelines for researchers to follow when collecting and evaluating Twitter-driven content. Objective The aim of this study was to identify studies on health care and social media that used Twitter feeds as a primary data source and CA as an analysis technique. We evaluated the resulting 18 studies based on a narrative review of previous methodological studies and textbooks to determine the criteria and main features of quantitative and qualitative CA. We then used the key features of CA and mixed-methods research designs to propose the combined content-analysis (CCA) model as a solid research framework for designing, conducting, and evaluating investigations of Twitter-driven content. Methods We conducted a PubMed search to collect studies published between 2010 and 2014 that used CA to analyze health care-related tweets. The PubMed search and reference list checks of selected papers identified 21 papers. We excluded 3 papers and further analyzed 18. Results Results suggest that the methods used in these studies were not purely quantitative or qualitative, and the mixed-methods design was not explicitly chosen for data collection and analysis. A solid research framework is needed for researchers who intend to analyze Twitter data through the use of CA. 
Conclusions We propose the CCA model as a useful framework that provides a straightforward approach to guide Twitter-driven studies and that

  3. A Hybrid Machine Learning Method for Fusing fMRI and Genetic Data: Combining both Improves Classification of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Honghui Yang

    2010-10-01

    We demonstrate a hybrid machine learning method to classify schizophrenia patients and healthy controls using functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP) data. The method consists of four stages: (1) SNPs with the most discriminating information between the healthy controls and schizophrenia patients are selected to construct a support vector machine ensemble (SNP-SVME). (2) Voxels in the fMRI map contributing to classification are selected to build another SVME (Voxel-SVME). (3) Components of fMRI activation obtained with independent component analysis (ICA) are used to construct a single SVM classifier (ICA-SVMC). (4) The above three models are combined into a single module using a majority voting approach to make a final decision (Combined SNP-fMRI). The method was evaluated by a fully validated leave-one-out method using 40 subjects (20 patients and 20 controls). The classification accuracy was: 0.74 for SNP-SVME, 0.82 for Voxel-SVME, 0.83 for ICA-SVMC, and 0.87 for Combined SNP-fMRI. Experimental results show that better classification accuracy was achieved by combining genetic and fMRI data than by using either alone, indicating that genetics and brain function represent different, but partially complementary, aspects of schizophrenia etiopathology. This study suggests an effective way to reassess biological classification of individuals with schizophrenia, which is also potentially useful for identifying diagnostically important markers for the disorder.
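    Stage (4), the majority-voting combiner, is straightforward to sketch; the labels below are invented and the base classifiers themselves are not reproduced here:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model predictions for one subject by majority vote.

    `predictions` holds one label per base classifier (here standing in
    for SNP-SVME, Voxel-SVME, and ICA-SVMC); with equal counts, the
    first-seen label wins, since Counter preserves insertion order.
    """
    return Counter(predictions).most_common(1)[0][0]

# Three base classifiers disagree on one subject:
final = majority_vote(["patient", "control", "patient"])
```

    With an odd number of base models, as in the paper's three-classifier ensemble, a two-class vote can never tie.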

  4. An approach to combining heuristic and qualitative reasoning in an expert system

    Science.gov (United States)

    Jiang, Wei-Si; Han, Chia Yung; Tsai, Lian Cheng; Wee, William G.

    1988-01-01

    An approach to combining heuristic reasoning from shallow knowledge with qualitative reasoning from deep knowledge is described. The shallow knowledge is represented in production rules and is under the direct control of the inference engine. The deep knowledge is represented in frames, which may be stored in a relational database management system. This approach takes advantage of both reasoning schemes and results in improved efficiency as well as expanded problem-solving ability.

  5. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys the tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation and to lead to more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.
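    The FCM stage can be sketched on one-dimensional intensity data. This is a generic textbook fuzzy C-means, not the paper's implementation; the intensity values are invented, and in the full pipeline the resulting memberships would seed the level-set evolution.

```python
def fcm_1d(data, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means: returns (centres, membership matrix).

    Memberships u[i][j] of point i in cluster j follow the standard
    inverse-distance update; centres are membership-weighted means.
    """
    lo, hi = min(data), max(data)
    # Initialise centres spread evenly across the data range.
    centres = [lo + (j + 0.5) * (hi - lo) / c for j in range(c)]
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # Membership update (1e-12 floor avoids division by zero).
        for i, x in enumerate(data):
            d = [abs(x - v) + 1e-12 for v in centres]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        # Centre update: membership-weighted mean of the data.
        for j in range(c):
            den = sum(u[i][j] ** m for i in range(len(data)))
            centres[j] = sum((u[i][j] ** m) * x for i, x in enumerate(data)) / den
    return centres, u

# Pixel intensities with two modes (e.g. sound tooth vs carious region):
data = [0.1, 0.15, 0.12, 0.9, 0.85, 0.95]
centres, u = fcm_1d(data, c=2)
```

    The fuzzy memberships, rather than a hard threshold, are what make the subsequent level-set initialisation less sensitive to noise.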

  6. A survey of approaches combining safety and security for industrial control systems

    International Nuclear Information System (INIS)

    Kriaa, Siwar; Pietre-Cambacedes, Ludovic; Bouissou, Marc; Halgand, Yoran

    2015-01-01

    The migration towards digital control systems creates new security threats that can endanger the safety of industrial infrastructures. Addressing the convergence of safety and security concerns in this context, we provide a comprehensive survey of existing approaches to industrial facility design and risk assessment that consider both safety and security. We also provide a comparative analysis of the different approaches identified in the literature. - Highlights: • We raise awareness of safety and security convergence in numerical control systems. • We highlight safety and security interdependencies for modern industrial systems. • We give a survey of approaches combining safety and security engineering. • We discuss the potential of the approaches to model safety and security interactions

  7. Simulation of dose deposition in stereotactic synchrotron radiation therapy: a fast approach combining Monte Carlo and deterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 'Rayonnement Synchrotron et Recherche Medicale', Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr

    2009-08-07

    A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method: the gain in speed in a test case was a factor of about 25 at constant precision. Therefore, this method appears to be suitable for treatment planning applications.
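    The splitting mechanism at the heart of the hybrid algorithm can be illustrated in isolation: each secondary particle is replaced by several copies of reduced statistical weight, so the expected score is unchanged while the variance drops. A toy sketch with a single scattering opportunity and an invented interaction probability (this is not the Geant4 implementation):

```python
import random

def splitting_estimate(n_histories, p_scatter=0.3, multiplicity=5, seed=1):
    """Toy splitting estimator of the mean number of scatter events.

    Instead of following one secondary per history, `multiplicity` copies
    are generated, each carrying weight 1/multiplicity, so the expected
    score per history is still p_scatter but is sampled more finely.
    """
    rng = random.Random(seed)
    score = 0.0
    w = 1.0 / multiplicity       # reduced weight of each split copy
    for _ in range(n_histories):
        for _ in range(multiplicity):
            if rng.random() < p_scatter:
                score += w       # each realised copy scores its weight
    return score / n_histories

est = splitting_estimate(2000)   # converges to p_scatter = 0.3
```

    In the paper's scheme the split secondaries are then transported deterministically by ray casting, which is where most of the speed-up over pure MC comes from.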

  8. Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling.

    Science.gov (United States)

    Yang, S; Wang, D

    2000-01-01

    This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of the NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases according to the sequence and resource constraints of the job-shop scheduling problem during processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.

  9. A simple method to combine multiple molecular biomarkers for dichotomous diagnostic classification

    Directory of Open Access Journals (Sweden)

    Amin Manik A

    2006-10-01

    Background: In spite of the recognized diagnostic potential of biomarkers, the quest for squelching noise and wringing information from a given set of biomarkers continues. Here, we suggest a statistical algorithm that – assuming each molecular biomarker to be a diagnostic test – enriches the diagnostic performance of an optimized set of independent biomarkers employing established statistical techniques. We validated the proposed algorithm using several simulation datasets in addition to four publicly available real datasets that compared (i) subjects having cancer with those without; (ii) subjects with two different cancers; (iii) subjects with two different types of one cancer; and (iv) subjects with the same cancer resulting in differential time to metastasis. Results: Our algorithm comprises three steps: estimating the area under the receiver operating characteristic curve for each biomarker, identifying a subset of biomarkers using linear regression, and combining the chosen biomarkers using linear discriminant function analysis. Combining these established statistical methods that are available in most statistical packages, we observed that the diagnostic accuracy of our approach was 100%, 99.94%, 96.67% and 93.92% for the real datasets used in the study. These estimates were comparable to or better than the ones previously reported using alternative methods. In a synthetic dataset, we also observed that all the biomarkers chosen by our algorithm were indeed truly differentially expressed. Conclusion: The proposed algorithm can be used for accurate diagnosis in the setting of dichotomous classification of disease states.
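    The first and third steps can be mocked up for two markers: per-marker AUC via the Mann-Whitney statistic, and combination via a linear discriminant. Here a plain two-variable Fisher discriminant stands in for the paper's linear discriminant function analysis, the regression-based subset selection is omitted, and the marker data are synthetic.

```python
def auc(cases, controls):
    """Mann-Whitney estimate of the area under the ROC curve."""
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

def fisher_weights(cases, controls):
    """Two-marker Fisher discriminant: w = S_w^-1 (mean_case - mean_ctrl)."""
    def mean(rows):
        return [sum(r[j] for r in rows) / len(rows) for j in (0, 1)]
    def scatter(rows, mu):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - mu[0], r[1] - mu[1]]
            for a in (0, 1):
                for b in (0, 1):
                    s[a][b] += d[a] * d[b]
        return s
    m1, m0 = mean(cases), mean(controls)
    s1, s0 = scatter(cases, m1), scatter(controls, m0)
    S = [[s1[a][b] + s0[a][b] for b in (0, 1)] for a in (0, 1)]  # pooled
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    # Solve S w = diff via the explicit 2x2 inverse.
    return [(S[1][1] * diff[0] - S[0][1] * diff[1]) / det,
            (S[0][0] * diff[1] - S[1][0] * diff[0]) / det]

# Synthetic markers: each alone overlaps across groups, their sum does not.
controls = [(0.0, 0.5), (0.5, 0.0), (0.1, 0.4), (0.4, 0.1)]
cases = [(0.3, 0.6), (0.6, 0.3), (0.4, 0.7), (0.7, 0.4)]
w = fisher_weights(cases, controls)
combined_auc = auc([w[0] * x + w[1] * y for x, y in cases],
                   [w[0] * x + w[1] * y for x, y in controls])
```

    On this toy data, each single marker yields an AUC below 1 while the discriminant score separates the groups completely, illustrating why combining markers can beat any one alone.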

  10. Why do fearful facial expressions elicit behavioral approach? Evidence from a combined approach-avoidance implicit association test.

    Science.gov (United States)

    Hammer, Jennifer L; Marsh, Abigail A

    2015-04-01

    Despite communicating a "negative" emotion, fearful facial expressions predominantly elicit behavioral approach from perceivers. It has been hypothesized that this seemingly paradoxical effect may occur due to fearful expressions' resemblance to vulnerable, infantile faces. However, this hypothesis has not yet been tested. We used a combined approach-avoidance/implicit association test (IAT) to test this hypothesis. Participants completed an approach-avoidance lever task during which they responded to fearful and angry facial expressions as well as neutral infant and adult faces presented in an IAT format. Results demonstrated an implicit association between fearful facial expressions and infant faces and showed that both fearful expressions and infant faces primarily elicit behavioral approach. The dominance of approach responses to both fearful expressions and infant faces decreased as a function of psychopathic personality traits. Results suggest that the prosocial responses to fearful expressions observed in most individuals may stem from their associations with infantile faces. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  11. A combined approach of physicochemical and biological methods for the characterization of petroleum hydrocarbon-contaminated soil.

    Science.gov (United States)

    Masakorala, Kanaji; Yao, Jun; Chandankere, Radhika; Liu, Haijun; Liu, Wenjuan; Cai, Minmin; Choi, Martin M F

    2014-01-01

    Main physicochemical and microbiological parameters of petroleum-contaminated soils with different degrees of contamination, collected from the DaGang oil field (southeast of Tianjin, northeast China), were comparatively analyzed in order to assess the influence of petroleum contaminants on the physicochemical and microbiological properties of soil. Microcalorimetry was integrated with urease enzyme analysis with the aim of assessing the general status of soil metabolism and the potential availability of nitrogen in soils stressed by petroleum-derived contaminants. The total petroleum hydrocarbon (TPH) content of the contaminated soils varied from 752.3 to 29,114 mg kg(−1). Although the studied physicochemical and biological parameters varied with TPH content, the correlation matrix also showed highly significant correlation coefficients among parameters, suggesting their utility in describing a complex matrix such as soil even in the presence of a high level of contaminants. The microcalorimetric measurements gave evidence of microbial adaptation at the highest TPH concentrations; this would help in assessing the potential of a polluted soil to promote self-degradation of oil-derived hydrocarbons under natural or assisted remediation. The results highlight the importance of applying a combined approach in the study of the parameters driving soil amelioration and bioremediation.

  12. A combined evidence Bayesian method for human ancestry inference applied to Afro-Colombians.

    Science.gov (United States)

    Rishishwar, Lavanya; Conley, Andrew B; Vidakovic, Brani; Jordan, I King

    2015-12-15

    Uniparental genetic markers, mitochondrial DNA (mtDNA) and Y-chromosomal DNA, are widely used for the inference of human ancestry. However, the resolution of ancestral origins based on mtDNA haplotypes is limited by the fact that such haplotypes are often distributed across wide geographical regions. We have addressed this issue here by combining two sources of ancestry information that have typically been considered separately: historical records regarding population origins and genetic information on mtDNA haplotypes. To combine these distinct data sources, we applied a Bayesian approach that considers historical records, in the form of prior probabilities, together with data on the geographical distribution of mtDNA haplotypes, formulated as likelihoods, to yield ancestry assignments from posterior probabilities. This combined evidence Bayesian approach to ancestry assignment was evaluated for its ability to accurately assign sub-continental African ancestral origins to Afro-Colombians based on their mtDNA haplotypes. We demonstrate that the incorporation of historical prior probabilities via this analytical framework can provide substantially increased resolution in sub-continental African ancestry assignment for members of this population. In addition, a personalized approach to ancestry assignment that involves tuning the priors to individual mtDNA haplotypes yields even greater resolution. Although Colombia has a large population of Afro-descendants, the ancestry of this community has been understudied relative to populations with primarily European and Native American ancestry; the application of the combined evidence approach developed here to the Afro-Colombian population therefore has the potential to be impactful. The formal Bayesian analytical framework we propose for combining historical and genetic information also has the potential to be widely applied.
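The core computation, historical priors combined with haplotype likelihoods into posteriors, reduces to a one-line application of Bayes' rule. A minimal sketch; the regions and all numbers below are invented for illustration:

```python
# Historical records as prior probabilities over hypothetical ancestral regions
priors = {"West-Central Africa": 0.60, "Southeast Africa": 0.30, "East Africa": 0.10}
# P(observed mtDNA haplotype | region), e.g. from haplotype frequency maps
likelihoods = {"West-Central Africa": 0.02, "Southeast Africa": 0.08, "East Africa": 0.01}

# Bayes' rule: posterior ∝ prior × likelihood, normalized by the evidence
evidence = sum(priors[r] * likelihoods[r] for r in priors)
posteriors = {r: priors[r] * likelihoods[r] / evidence for r in priors}

best = max(posteriors, key=posteriors.get)
print(best, round(posteriors[best], 3))
```

Note how the haplotype evidence overturns the prior here: the region with the highest likelihood wins the assignment despite a smaller prior, which is exactly the gain in resolution the record describes.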

  13. A combination of genetic algorithm and particle swarm optimization method for solving traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Keivan Borna

    2015-12-01

    Full Text Available Traveling salesman problem (TSP) is a well-established NP-complete problem, and many evolutionary techniques, like particle swarm optimization (PSO), are used to optimize solutions for it. PSO is a method inspired by the social behavior of birds. In PSO, each member changes its position in the search space according to its own experience and the social experience of the whole swarm. In this paper, we combine the principles of PSO with the crossover operator of genetic algorithms to propose a heuristic algorithm for solving the TSP more efficiently. Finally, experimental results of our algorithm on instances from TSPLIB demonstrate the effectiveness of our method and show that it can achieve better results than other approaches.
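One common way to realize this idea is to replace PSO's continuous velocity update with crossover toward the personal and global bests. The sketch below uses GA order crossover (OX) for that role on a 12-city instance; the update scheme and all parameters are illustrative, not the paper's exact algorithm.

```python
import random, math

def tour_len(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    # GA order crossover (OX): copy a slice of p1, fill the rest in p2's order
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    for i in list(range(b, n)) + list(range(a)):
        child[i] = fill.pop(0)
    return child

random.seed(1)
# 12 cities on a unit circle: the optimal tour is the circle's perimeter
pts = [(math.cos(2 * math.pi * i / 12), math.sin(2 * math.pi * i / 12))
       for i in range(12)]
swarm = [random.sample(range(12), 12) for _ in range(30)]
pbest = list(swarm)
gbest = min(swarm, key=lambda t: tour_len(t, pts))

for _ in range(200):
    for i, tour in enumerate(swarm):
        # "Velocity" update: pull toward personal and global best via crossover,
        # plus one random swap as a mutation/exploration term
        tour = order_crossover(pbest[i], tour)
        tour = order_crossover(gbest, tour)
        a, b = random.sample(range(12), 2)
        tour[a], tour[b] = tour[b], tour[a]
        swarm[i] = tour
        if tour_len(tour, pts) < tour_len(pbest[i], pts):
            pbest[i] = tour
    gbest = min(pbest, key=lambda t: tour_len(t, pts))

print(round(tour_len(gbest, pts), 3))
```

The crossover operators keep every particle a valid permutation, which is precisely the difficulty that rules out naive continuous PSO on the TSP.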

  14. Clinical treatment approach of a child with molar incisor hypomineralization (MIH) combined with malocclusion.

    Directory of Open Access Journals (Sweden)

    Rossitza Kabaktchieva

    2012-04-01

    Full Text Available Introduction. Molar incisor hypomineralization (MIH) was defined as "hypomineralisation of systemic origin of permanent first molars, frequently associated with affected incisors". MIH includes the presence of demarcated opacities, post-eruptive enamel breakdown and atypical restorations. The suggested approach to management includes risk identification, early diagnosis, remineralisation for the prevention of caries and post-eruptive breakdown, and restorations. Clinicians very seldom notice that children with MIH usually have both hypomineralisation and malocclusion, and they rarely discuss a combined treatment plan. Aim. To present our interdisciplinary approach to a patient with MIH combined with malocclusion. Material and methods. We present a 9-year-old child with contusio and fractura coronae dentis noncomplicata, distal occlusion, overjet, overbite and retrusion. Two consecutive stages were defined. First stage: professional oral hygiene and local remineralisation therapy; vital pulp therapy of tooth 21; space gaining for restoration of the lost height of the molars by means of a posterior bite-plane removable appliance; restoration of the molars with metal inlays; lingual tipping of the lower incisors. Second stage: Class II correction; growth control. Results. First phase: tooth 21 was restored with an aesthetic composite material; the occlusion was raised with occlusal restorations (inlays) and an orthodontic appliance. Second phase: medialisation of the mandible and restraint of maxillary growth with a functional appliance and an occipital EOA until Class I occlusal relations were reached. Conclusion. Children with MIH should be examined and treated comprehensively, in collaboration with an orthodontist and, if necessary, other specialists.

  15. Monitoring hemodynamics and oxygenation of the kidney in rats by a combined near-infrared spectroscopy and invasive probe approach

    Science.gov (United States)

    Grosenick, Dirk; Cantow, Kathleen; Arakelyan, Karen; Wabnitz, Heidrun; Flemming, Bert; Skalweit, Angela; Ladwig, Mechthild; Macdonald, Rainer; Niendorf, Thoralf; Seeliger, Erdmann

    2015-07-01

    We have developed a hybrid approach to investigate the dynamics of perfusion and oxygenation in the kidney of rats under pathophysiologically relevant conditions. Our approach combines near-infrared spectroscopy to quantify hemoglobin concentration and oxygen saturation in the renal cortex, and an invasive probe method for measuring total renal blood flow by an ultrasonic probe, perfusion by laser-Doppler fluxmetry, and tissue oxygen tension via fluorescence quenching. Hemoglobin concentration and oxygen saturation were determined from experimental data by a Monte Carlo model. The hybrid approach was applied to investigate and compare temporal changes during several types of interventions such as arterial and venous occlusions, as well as hyperoxia, hypoxia and hypercapnia induced by different mixtures of the inspired gas. The approach was also applied to study the effects of the x-ray contrast medium iodixanol on the kidney.

  16. On the complexity of a combined homotopy interior method for convex programming

    Science.gov (United States)

    Yu, Bo; Xu, Qing; Feng, Guochen

    2007-03-01

    In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex programming and convex programming under weaker conditions, without assuming the logarithmic barrier function to be strictly convex or the solution set to be bounded. It was proven that a smooth interior path exists from an interior point of the feasible set to a K-K-T point of the problem. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot. However, so far there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by using a piecewise technique under commonly used conditions, polynomiality of a combined homotopy interior point method is established for convex nonlinear programming.

  17. Vulnerability assessment of the Toluca Valley aquifer combining a parametric approach and advective transport

    International Nuclear Information System (INIS)

    Gárfias, J.; Llanos, H.; Franco, R.; Martel, R.

    2017-01-01

    Groundwater vulnerability assessment is an important task in water resources and land management. Depending on the availability of data and the complexity of the hydrogeological conditions, different approaches can be adopted. As an alternative, this study uses a combined approach based on vulnerability methods and advective particle tracking to better understand the susceptibility to contamination of the Toluca valley aquifer. An intrinsic vulnerability map (DRASTIC) was used to identify areas that are more susceptible to groundwater contamination. To estimate advective particle tracking, we developed a 3D flow model using Visual MODFLOW and MODPATH to describe the regional flow of groundwater. The vulnerability map demonstrates the problematic application and interpretation of the qualitative vulnerability method of the parametric system group, which indicates a difference of approximately 23% when compared with the modified vulnerability map. Potential contamination sources based on landfill sites were comparatively high; approximately 76% are located in areas that could be susceptible to contamination through vertical infiltration, especially those located along the Lerma system of wells. Industrial parks located in the centre of the valley (83%), where continuous extraction of groundwater and land subsidence occur, have been classified as high-vulnerability zones, increasing the risk of contaminants from surface sources reaching the groundwater. In order to understand the susceptibility of the aquifer to contamination, various delineation approaches should be adopted and all mutually validating results considered, making for a sound strategy for implementing different degrees of protection measures.
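The DRASTIC index behind the intrinsic vulnerability map is simply a weighted sum of seven rated parameters. The weights below are the standard DRASTIC weights; the cell ratings are invented for illustration:

```python
# Standard DRASTIC weights: Depth to water, net Recharge, Aquifer media, Soil
# media, Topography, Impact of the vadose zone, hydraulic Conductivity
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    # Each parameter is rated 1-10 for a grid cell; higher index = more vulnerable
    return sum(weights[p] * ratings[p] for p in weights)

# A hypothetical grid cell of the aquifer (illustrative ratings)
cell = {"D": 9, "R": 6, "A": 6, "S": 4, "T": 10, "I": 8, "C": 4}
print(drastic_index(cell))  # prints 157
```

With these weights the index ranges from 23 (all ratings 1) to 230 (all ratings 10); mapping the index per cell produces the vulnerability map discussed above.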

  18. A robust combination approach for short-term wind speed forecasting and analysis – Combination of the ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM) forecasts using a GPR (Gaussian Process Regression) model

    International Nuclear Information System (INIS)

    Wang, Jianzhou; Hu, Jianming

    2015-01-01

    With the increasing importance of wind power as a component of power systems, the problems induced by the stochastic and intermittent nature of wind speed have compelled system operators and researchers to search for more reliable techniques to forecast wind speed. This paper proposes a combination model for probabilistic short-term wind speed forecasting. In this proposed hybrid approach, EWT (Empirical Wavelet Transform) is employed to extract meaningful information from a wind speed series by designing an appropriate wavelet filter bank. The GPR (Gaussian Process Regression) model is utilized to combine independent forecasts generated by various forecasting engines (ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM)) in a nonlinear way rather than the commonly used linear way. The proposed approach provides more probabilistic information for wind speed predictions besides improving the forecasting accuracy for single-value predictions. The effectiveness of the proposed approach is demonstrated with wind speed data from two wind farms in China. The results indicate that the individual forecasting engines do not consistently forecast short-term wind speed for the two sites, and the proposed combination method can generate a more reliable and accurate forecast. - Highlights: • The proposed approach can make probabilistic modeling for wind speed series. • The proposed approach adapts to the time-varying characteristic of the wind speed. • The hybrid approach can extract the meaningful components from the wind speed series. • The proposed method can generate adaptive, reliable and more accurate forecasting results. • The proposed model combines four independent forecasting engines in a nonlinear way.
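The nonlinear combination step can be sketched as a plain Gaussian-process regression over the vector of individual forecasts. The kernel, noise level, and toy data below are illustrative, and the EWT preprocessing and actual forecasting engines are omitted:

```python
import math, random

def rbf(x, z, ls=1.0):
    # Squared-exponential kernel between two forecast vectors
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, z)) / (2 * ls ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small system A x = b
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

random.seed(0)
# Toy data: two individual forecasting engines; the actual value is a
# nonlinear blend of their outputs, which a linear combiner would miss
X, y = [], []
for _ in range(40):
    f1, f2 = random.uniform(0, 10), random.uniform(0, 10)
    X.append((f1, f2))
    y.append(0.6 * f1 + 0.4 * f2 + 0.5 * math.sin(f1))

# GP regression: alpha = (K + sigma^2 I)^-1 y; prediction = k(x*)^T alpha
sigma2 = 0.01
K = [[rbf(a, b) + (sigma2 if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]
alpha = solve(K, y)

def combine(f1, f2):
    return sum(a * rbf((f1, f2), x) for a, x in zip(alpha, X))

print(round(combine(5.0, 5.0), 2))
```

The GP learns the blending function from past (forecasts, actual) pairs, and its predictive variance is what provides the probabilistic information mentioned above (omitted here for brevity).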

  19. A typology of health marketing research methods--combining public relations methods with organizational concern.

    Science.gov (United States)

    Rotarius, Timothy; Wan, Thomas T H; Liberman, Aaron

    2007-01-01

    Research plays a critical role in virtually every sector of the health services industry. The key terms of research, public relations, and organizational interests are discussed. Combining public relations as a strategic methodology with organizational concern as a factor, a typology of four different research methods emerges. These four health marketing research methods are: investigative, strategic, informative, and verification. The implications of these distinct and contrasting research methods are examined.

  20. Effect of combined teaching method (role playing and storytelling ...

    African Journals Online (AJOL)

    Effect of combined teaching method (role playing and storytelling) on creative ... Background and Purpose: Storytelling promotes imagination and satisfies curiosity in children and creates learning opportunities in them.

  1. Determination of alcohol and extract concentration in beer samples using a combined method of near-infrared (NIR) spectroscopy and refractometry.

    Science.gov (United States)

    Castritius, Stefan; Kron, Alexander; Schäfer, Thomas; Rädle, Matthias; Harms, Diedrich

    2010-12-22

    A new approach combining near-infrared (NIR) spectroscopy and refractometry was developed in this work to determine the concentration of alcohol and real extract in various beer samples. A partial least-squares (PLS) regression was used as the multivariate calibration method to evaluate the correlation between the spectroscopy/refractometry data and the alcohol/extract concentrations. This multivariate combination of spectroscopy and refractometry enhanced the precision of the alcohol determination compared to spectroscopy alone, owing to the effect of high extract concentrations on the spectral data, especially in nonalcoholic beer samples. For the NIR calibration, two mathematical pretreatments (first-order derivation and linear baseline correction) were applied to eliminate light-scattering effects. A sample grouping of the refractometry data was also applied to increase the accuracy of the determined concentrations. The root mean squared errors of validation (RMSEV) for alcohol and extract concentration were 0.23 Mas% (method A), 0.12 Mas% (method B) and 0.19 Mas% (method C), and 0.11 Mas% (method A), 0.11 Mas% (method B) and 0.11 Mas% (method C), respectively.
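A minimal multivariate-calibration sketch in the same spirit, with ordinary least squares standing in for PLS and invented instrument responses (the coefficients and noise levels are not beer physics):

```python
import random

random.seed(0)
# Synthetic calibration set: an "NIR absorbance" and a refractive index that
# both respond to alcohol and extract, plus measurement noise
n = 60
alcohol = [random.uniform(0.0, 6.0) for _ in range(n)]   # Mas%
extract = [random.uniform(3.0, 12.0) for _ in range(n)]  # Mas%
nir = [0.30 * a + 0.02 * e + random.gauss(0, 0.02) for a, e in zip(alcohol, extract)]
ri = [0.10 * a + 0.08 * e + random.gauss(0, 0.02) for a, e in zip(alcohol, extract)]

def mean(v):
    return sum(v) / len(v)

def fit2(x1, x2, y):
    # Least squares y ~ b1*x1 + b2*x2 + b0, via the 2x2 normal equations
    # on centered data (a stand-in for the PLS regression used above)
    mx1, mx2, my = mean(x1), mean(x2), mean(y)
    c1 = [v - mx1 for v in x1]
    c2 = [v - mx2 for v in x2]
    cy = [v - my for v in y]
    s11 = sum(a * a for a in c1)
    s22 = sum(a * a for a in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    t1 = sum(a * b for a, b in zip(c1, cy))
    t2 = sum(a * b for a, b in zip(c2, cy))
    det = s11 * s22 - s12 * s12
    b1 = (t1 * s22 - t2 * s12) / det
    b2 = (s11 * t2 - s12 * t1) / det
    return b1, b2, my - b1 * mx1 - b2 * mx2

b1, b2, b0 = fit2(nir, ri, alcohol)
pred = [b1 * u + b2 * v + b0 for u, v in zip(nir, ri)]
rmse = (sum((p - a) ** 2 for p, a in zip(pred, alcohol)) / n) ** 0.5
print(round(rmse, 3))
```

Because the two instruments respond to alcohol and extract in different proportions, the two-predictor calibration can disentangle them in a way neither instrument can alone, which is the motivation for the combined approach above.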

  2. New Combined Electron-Beam Methods of Wastewater Purification

    International Nuclear Information System (INIS)

    Pikaev, A.K.; Makarov, I.E.; Ponomarev, A.V.; Kartasheva, L.I.; Podzorova, E.A.; Chulkov, V.N.; Han, B.; Kim, D.K.

    1999-01-01

    This paper is a brief review of results obtained with the participation of the authors from studies on combined electron-beam methods for the purification of some wastewaters. Data on the purification of wastewaters containing dyes or hydrogen peroxide, and of municipal wastewater in the aerosol flow, are considered

  3. The e/h method of energy reconstruction for combined calorimeter

    International Nuclear Information System (INIS)

    Kul'chitskij, Yu.A.; Kuz'min, M.V.; Vinogradov, V.B.

    1999-01-01

    A new simple method of energy reconstruction for a combined calorimeter, which we call the e/h method, is suggested. It uses only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. The method has been tested on the 1996 test beam data of the ATLAS barrel combined calorimeter and demonstrated correct reconstruction of the mean values of the energies. The obtained fractional energy resolution is [(58 ± 3)%/√E + (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2) GeV/E. This algorithm can be used for fast energy reconstruction in the first level trigger
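Schematically, an e/h-style reconstruction corrects the electron-scale response for the energy-dependent electromagnetic shower fraction. The sketch below uses an assumed e/h value and the common f_em ≈ 0.11·ln E parametrization purely for illustration; it is not the ATLAS algorithm.

```python
import math

E_OVER_H = 1.35  # assumed non-compensation ratio of the calorimeter

def f_em(E):
    # Average electromagnetic fraction of a hadronic shower; 0.11*ln(E)
    # (E in GeV) is a common parametrization, used here only for illustration
    return min(1.0, max(0.0, 0.11 * math.log(E)))

def pi_over_e(E):
    # Pion/electron response ratio: the EM part responds like an electron,
    # the hadronic part is suppressed by 1/(e/h)
    return f_em(E) + (1.0 - f_em(E)) / E_OVER_H

def reconstruct(R, iterations=20):
    # Fixed-point iteration: E = R / (pi/e)(E), starting from the raw response
    E = R
    for _ in range(iterations):
        E = R / pi_over_e(E)
    return E

# Round trip: simulate the electron-scale response of a 50 GeV pion, then invert
E_true = 50.0
R = E_true * pi_over_e(E_true)
print(round(reconstruct(R), 3))
```

Because f_em varies slowly with energy, the fixed-point iteration converges in a few steps, which is what makes this kind of correction cheap enough for a first-level trigger.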

  4. Suspended sediment assessment by combining sound attenuation and backscatter measurements - analytical method and experimental validation

    Science.gov (United States)

    Guerrero, Massimo; Di Federico, Vittorio

    2018-03-01

    The use of acoustic techniques has become common for estimating suspended sediment in water environments. An emitted beam propagates into water, producing backscatter and attenuation that depend on the concentration and size distribution of the scattering particles. Unfortunately, the actual particle size distribution (PSD) may largely affect the accuracy of concentration quantification through the unknown coefficients of backscattering strength, ks2, and normalized attenuation, ζs. This issue was partially solved by applying the multi-frequency approach. Despite this possibility, a relevant scientific and practical question remains regarding the possibility of using acoustic methods to investigate poorly sorted sediment in the spectrum ranging from clay to fine sand. The aim of this study is to investigate the possibility of combining measurements of sound attenuation and backscatter to determine ζs for the suspended particles and the corresponding concentration. The proposed method depends only moderately on the actual PSD, thus relaxing the need for frequent calibrations to account for changes in the ks2 and ζs coefficients. Laboratory tests were conducted under controlled conditions to validate this measurement technique. With respect to existing approaches, the developed method more accurately estimates the concentration of suspended particles ranging from clay to fine sand and, at the same time, gives an indication of their actual PSD.
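A toy inversion in the spirit of the method: in a log-domain echo profile, the intercept carries the backscatter term (fixing the concentration) while the slope carries the sediment attenuation (then fixing ζs). The forward model and all constants below are invented for illustration, not the paper's calibration.

```python
import math, random

# Forward model (log domain): range-corrected echo with intercept ks^2 * M
# and slope -4 * zeta_s * M (two-way attenuation); water absorption omitted
KS2, ZETA_S, M_TRUE = 0.04, 0.012, 250.0  # backscatter coeff, attenuation, mg/L
random.seed(0)
ranges = [0.2 * i for i in range(1, 40)]
log_echo = [math.log(KS2 * M_TRUE) - 4.0 * ZETA_S * M_TRUE * r + random.gauss(0, 0.02)
            for r in ranges]

# Inversion: linear regression of log-echo on range gives slope and intercept
n = len(ranges)
mr = sum(ranges) / n
me = sum(log_echo) / n
slope = (sum((r - mr) * (e - me) for r, e in zip(ranges, log_echo))
         / sum((r - mr) ** 2 for r in ranges))
intercept = me - slope * mr

M_est = math.exp(intercept) / KS2      # concentration from the intercept
zeta_est = -slope / (4.0 * M_est)      # attenuation coefficient from the slope
print(round(M_est, 1), round(zeta_est, 4))
```

Solving for both quantities at once is what reduces the dependence on an assumed PSD: ζs is measured rather than taken from a calibration table.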

  5. Entropy method combined with extreme learning machine method for the short-term photovoltaic power generation forecasting

    International Nuclear Information System (INIS)

    Tang, Pingzhou; Chen, Di; Hou, Yushuo

    2016-01-01

    As the world’s energy problem becomes more severe day by day, photovoltaic power generation has undoubtedly opened a new door for us. It will provide an effective solution to this severe energy problem and meet human energy needs if we can apply photovoltaic power generation in real life. Similar to wind power generation, photovoltaic power generation is uncertain; therefore, the forecasting of photovoltaic power generation is crucial. In this paper, the entropy method and the extreme learning machine (ELM) method were combined to forecast short-term photovoltaic power generation. First, the entropy method is used to process the initial data; the network is then trained on the unified data and used to forecast electricity generation. Finally, the results obtained through the entropy method with ELM were compared with those generated by the generalized regression neural network (GRNN) and radial basis function neural network (RBF) methods. We found that the entropy method combined with the ELM method achieves higher accuracy and faster calculation.
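The ELM half of the method is compact enough to sketch: hidden-layer weights are drawn at random and only the output weights are fitted, by least squares, which is why ELM training is fast. The data and sizes below are illustrative, and the entropy preprocessing step is omitted.

```python
import math, random

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small system A x = b
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

random.seed(0)
# Toy "generation curve": a smooth daily profile standing in for PV output
X = [i / 50.0 for i in range(50)]
y = [math.sin(math.pi * x) for x in X]

# ELM: random input weights and biases, sigmoid hidden layer; only the output
# weights beta are trained, via the (ridge-regularized) normal equations
L = 10
w = [random.uniform(-4, 4) for _ in range(L)]
b = [random.uniform(-2, 2) for _ in range(L)]

def hidden(x):
    return [1.0 / (1.0 + math.exp(-(w[j] * x + b[j]))) for j in range(L)]

H = [hidden(x) for x in X]
HtH = [[sum(H[i][a] * H[i][c] for i in range(len(X))) + (1e-6 if a == c else 0.0)
        for c in range(L)] for a in range(L)]
Hty = [sum(H[i][a] * y[i] for i in range(len(X))) for a in range(L)]
beta = solve(HtH, Hty)

pred = [sum(beta[j] * h[j] for j in range(L)) for h in H]
rmse = (sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)) ** 0.5
print(round(rmse, 4))
```

A single linear solve replaces the iterative backpropagation of GRNN/RBF-style training, which is the speed advantage the record reports.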

  6. Semi top-down method combined with earth-bank, an effective method for basement construction.

    Science.gov (United States)

    Tuan, B. Q.; Tam, Ng M.

    2018-04-01

    Choosing an appropriate method of deep excavation plays a decisive role not only in the technical success of a construction project but also in its economics. At present, we mainly rely on two key methods: the "bottom-up" and "top-down" construction methods. This paper presents another method of construction, the "semi top-down method combined with earth-bank", which takes the advantages and limits the weaknesses of the above methods. The bottom-up method is improved by using an earth-bank to stabilize the retaining walls instead of bracing steel struts. The top-down method is improved by using the open-cut method for half of the earthwork quantities.

  7. Combination of real options and game-theoretic approach in investment analysis

    Science.gov (United States)

    Arasteh, Abdollah

    2016-09-01

    Investments in technology account for a large share of capital investment by major companies. Assessing such investment projects is critical to the efficient assignment of resources. Viewing investment projects as real options, this paper develops a method for assessing technology investment decisions in the presence of uncertainty and competition. It combines game-theoretic models of strategic market interactions with a real options approach. Several key characteristics underlie the model. First, our study shows how investment strategies rely on competitive interactions. Under the force of competition, firms hurry to exercise their options early. The resulting "hurry equilibrium" destroys the option value of waiting and leads to aggressive investment behavior. Second, we derive optimal investment policies and critical investment thresholds. This suggests that integration will be unavoidable in some information product markets. The model yields new intuitions into the forces that shape market behavior as observed in the information technology industry. It can be used to specify optimal investment policies for technology innovations and adoptions, multistage R&D, and investment projects in information technology.
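The real-options half can be illustrated with a standard binomial lattice for the option to defer an irreversible investment; the game-theoretic interaction is omitted and all numbers are invented.

```python
import math

# Illustrative numbers: project value, investment cost, rate, volatility, horizon
V0, I = 100.0, 95.0
r, sigma, T, n = 0.05, 0.30, 2.0, 200
dt = T / n
u = math.exp(sigma * math.sqrt(dt))
d = 1.0 / u
q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up-probability
disc = math.exp(-r * dt)

# Terminal payoffs: invest only if the project value exceeds the cost
vals = [max(V0 * u ** j * d ** (n - j) - I, 0.0) for j in range(n + 1)]

# Backward induction with an early-exercise (invest-now) check at every node
for step in range(n - 1, -1, -1):
    vals = [max(disc * (q * vals[j + 1] + (1 - q) * vals[j]),
                V0 * u ** j * d ** (step - j) - I)
            for j in range(step + 1)]

option_value = vals[0]
npv_now = V0 - I
print(round(option_value, 2), round(npv_now, 2))
```

The gap between the option value and the naive NPV of investing immediately is the option value of waiting; competitive pressure of the kind modeled above erodes exactly that gap and triggers early exercise.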

  8. Integrative health care method based on combined complementary ...

    African Journals Online (AJOL)

    Background: There are various models of health care, such as the ... sociological, economic, systemic of Neuman, cognitive medicine or ecological, ayurvedic, ... 2013, with a comprehensive approach in 64 patients using the clinical method.

  9. A Combined Approach to Measure Micropollutant Behaviour during Riverbank Filtration

    Science.gov (United States)

    van Driezum, Inge; Saracevic, Ernis; Derx, Julia; Kirschner, Alexander; Sommer, Regina; Farnleitner, Andreas; Blaschke, Alfred Paul

    2016-04-01

    Riverbank filtration (RBF) systems are widely used as a natural treatment process. The advantages of RBF over surface water abstraction are the elimination of, for example, suspended solids, biodegradable compounds (like specific micropollutants), bacteria and viruses (Hiscock and Grischek, 2002). However, in contrast to its importance, remarkably little is known about the respective external (e.g. industrial or municipal sewage) and internal (e.g. wildlife and agricultural) sources of contaminants, the environmental availability and fate of the various hazardous substances, and their potential transport during soil and aquifer passage. The goal of this study is to gain insight into the behaviour of various micropollutants and microbial indicators during riverbank filtration. Field measurements were combined with numerical modelling approaches. The study area comprises an alluvial backwater and floodplain area downstream of Vienna. The river is highly dynamic, with discharges ranging from 900 m3/s during low flow to 11000 m3/s during flood events. Samples were taken in several monitoring wells along a transect extending from the river towards a backwater river in the floodplain. Three of the piezometers were situated within the first 20 meters of the river in order to obtain information about micropollutant behaviour close to the river. A total of 9 different micropollutants were analysed in grab samples taken under different river flow conditions (n=33). Following enrichment using SPE, analysis was performed using high performance liquid chromatography-tandem mass spectrometry. Faecal indicators (E. coli and enterococci) and bacterial spores were enumerated in sample volumes of 1 L each using cultivation-based methods (ISO 16649-1, ISO 7899-2:2000 and ISO 6222). The analysis showed that some compounds, e.g. ibuprofen and diclofenac, were only found in the river. These compounds were already degraded within the first ten meters of the river. Analysis of

  10. IFRS and US GAAP convergence in the area of business combination

    Directory of Open Access Journals (Sweden)

    Hana Bohušová

    2008-01-01

    Full Text Available The IASB project "Business Combinations" started in 2001. Its main objectives were to increase the quality of financial statements and to harmonize the recording of business combinations internationally. The project was divided into two phases. IFRS 3 (2004) "Business Combinations", which replaced the former IAS 22, was the result of the first phase. Partial harmonization of the recording and financial reporting of business combinations in Europe and in the USA was the main objective of IFRS 3 (2004). IFRS 3 (2004) is based on SFAS 141 (2001), which was developed in 2001 and replaced APB (Accounting Principles Board) Opinion No. 16 Business Combinations and SFAS 38. The pooling-of-interests method is forbidden and only the purchase method is allowed for all kinds of business combinations. Based on a comparison of both methodical approaches to business combinations, the reasons for rejecting the pooling-of-interests method are demonstrated. The second phase is aimed at the application of the purchase method and new methodical approaches to recording business combinations, and is the subject of the conclusion of this paper.

  11. Automated lesion detection on MRI scans using combined unsupervised and supervised methods

    International Nuclear Information System (INIS)

    Guo, Dazhou; Fridriksson, Julius; Fillmore, Paul; Rorden, Christopher; Yu, Hongkai; Zheng, Kang; Wang, Song

    2015-01-01

    Accurate and precise detection of brain lesions on MR images (MRI) is paramount for accurately relating lesion location to impaired behavior. In this paper, we present a novel method to automatically detect brain lesions from a T1-weighted 3D MRI. The proposed method combines the advantages of both unsupervised and supervised methods. First, unsupervised methods perform a unified segmentation normalization to warp images from the native space into a standard space and to generate probability maps for different tissue types, e.g., gray matter, white matter and fluid. This allows us to construct an initial lesion probability map by comparing the normalized MRI to healthy control subjects. Then, we perform non-rigid and reversible atlas-based registration to refine the probability maps of gray matter, white matter, external CSF, ventricle, and lesions. These probability maps are combined with the normalized MRI to construct three types of features, with which we use supervised methods to train three support vector machine (SVM) classifiers for a combined classifier. Finally, the combined classifier is used to accomplish lesion detection. We tested this method using T1-weighted MRIs from 60 in-house stroke patients. Using leave-one-out cross validation, the proposed method can achieve an average Dice coefficient of 73.1 % when compared to lesion maps hand-delineated by trained neurologists. Furthermore, we tested the proposed method on the T1-weighted MRIs in the MICCAI BRATS 2012 dataset. The proposed method can achieve an average Dice coefficient of 66.5 % in comparison to the expert annotated tumor maps provided in MICCAI BRATS 2012 dataset. In addition, on these two test datasets, the proposed method shows competitive performance to three state-of-the-art methods, including Stamatakis et al., Seghier et al., and Sanjuan et al. 
In this paper, we introduced a novel automated procedure for lesion detection from T1-weighted MRIs by combining both an unsupervised and a
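The Dice coefficient used for evaluation above compares two binary lesion masks; a minimal sketch with invented voxel sets:

```python
def dice(a, b):
    # Dice coefficient between two binary masks given as sets of voxel indices:
    # 2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)
    inter = len(a & b)
    return 2.0 * inter / (len(a) + len(b)) if (a or b) else 1.0

auto = {(1, 1), (1, 2), (2, 2), (3, 3)}    # hypothetical automated detection
manual = {(1, 1), (2, 2), (3, 3), (4, 4)}  # hypothetical expert delineation
print(dice(auto, manual))  # 2*3 / (4+4) = 0.75
```

The reported 73.1% and 66.5% figures are averages of this score over patients, computed against the hand-delineated reference masks.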

  12. Combined approach to reduced duration integrated leakage rate testing

    International Nuclear Information System (INIS)

    Galanti, P.J.

    1987-01-01

    Even though primary reactor containment allowable leakage rates are expressed in weight percent per day of contained air, engineers have been attempting to define acceptable methods to test in < 24 h for as long as these tests have been performed. The reasons to reduce testing duration are obvious: time not generating electricity is time not generating revenue for the utilities. The latest proposed revision to 10CFR50 Appendix J, concerning integrated leakage rate tests (ILRTs), was supplemented with a draft regulatory guide proposing yet another method. This paper proposes a method that combines elements of currently accepted concepts for short-duration testing with a standard statistical check of the acceptance criteria. Following a presentation of the method, several cases are presented showing the results of these combined criteria

  13. 5th International Workshop on Combinations of Intelligent Methods and Applications

    CERN Document Server

    Palade, Vasile; Prentzas, Jim

    2017-01-01

    Complex problems usually cannot be solved by individual methods or techniques; they require the synergy of more than one to be solved. This book presents a number of current efforts that use combinations of methods or techniques to solve complex problems in the areas of sentiment analysis, search in GIS, graph-based social networking, intelligent e-learning systems, data mining and recommendation systems. Most of them are connected with specific applications, whereas the rest are combinations based on principles. Most of the chapters are extended versions of the corresponding papers presented at the CIMA-15 Workshop, which took place in conjunction with IEEE ICTAI-15 in November 2015. The rest are invited papers that responded to a special call for papers for the book. The book is addressed to researchers and practitioners from academia or industry who are interested in using combined methods to solve complex problems in the above areas.

  14. Sound transmission analysis of plate structures using the finite element method and elementary radiator approach with radiator error index

    DEFF Research Database (Denmark)

    Jung, Jaesoon; Kook, Junghwan; Goo, Seongyeol

    2017-01-01

    A method that combines the FEM and the Elementary Radiator Approach (ERA) is proposed. The FE-ERA method analyzes the vibrational response of the plate structure excited by incident sound using FEM and then computes the transmitted acoustic pressure from the vibrating plate using ERA. In order to improve the accuracy … and efficiency of the FE-ERA method, a novel criterion for the optimal number of elementary radiators is proposed. The criterion is based on the radiator error index, which is derived to estimate the accuracy of the computation with the number of radiators used. Using the proposed criterion, a radiator selection method … is presented for determining the optimum number of radiators. The radiator selection method and the FE-ERA method are combined to improve the computational accuracy and efficiency. Several numerical examples that have rarely been addressed in previous studies are presented with the proposed method …

  15. Formalized Conflicts Detection Based on the Analysis of Multiple Emails: An Approach Combining Statistics and Ontologies

    Science.gov (United States)

    Zakaria, Chahnez; Curé, Olivier; Salzano, Gabriella; Smaïli, Kamel

    In Computer Supported Cooperative Work (CSCW), it is crucial for project leaders to detect conflicting situations as early as possible. Generally, this task is performed manually by studying a set of documents exchanged between team members. In this paper, we propose a full-fledged automatic solution that identifies documents, subjects and actors involved in relational conflicts. Our approach detects conflicts in emails, probably the most popular type of document in CSCW, but the methods used can handle other text-based documents. These methods rely on the combination of statistical and ontological operations. The proposed solution is decomposed into several steps: (i) we enrich a simple negative-emotion ontology with terms occurring in the corpus of emails, (ii) we categorize each conflicting email according to the concepts of this ontology and (iii) we identify emails, subjects and team members involved in conflicting emails using possibilistic description logic and a set of proposed measures. Each of these steps is evaluated and validated on concrete examples. Moreover, the approach's framework is generic and can easily be adapted to domains other than conflicts, e.g. security issues, and extended with operations making use of our proposed set of measures.

  16. Combining sap flow and eddy covariance approaches to derive stomatal and non-stomatal O3 fluxes in a forest stand

    International Nuclear Information System (INIS)

    Nunn, A.J.; Cieslik, S.; Metzger, U.; Wieser, G.; Matyssek, R.

    2010-01-01

    Stomatal O3 fluxes to a mixed beech/spruce stand (Fagus sylvatica/Picea abies) in Central Europe were determined using two different approaches. The sap flow technique yielded the tree-level transpiration, whereas the eddy covariance method provided the stand-level evapotranspiration. Both data sets were then converted into stomatal ozone fluxes, exemplifying this novel concept for July 2007. The sap flow-based stomatal O3 flux was 33% of the total O3 flux, whereas derivation from evapotranspiration rates in combination with the Penman-Monteith algorithm amounted to 47%. In addition to this proportional difference, the sap flow-based assessment yielded lower levels of stomatal O3 flux and reflected stomatal regulation rather than O3 exposure, paralleling the daily courses of canopy conductance for water vapor and of the eddy covariance-based total stand-level O3 flux. The demonstrated combination of sap flow and eddy covariance approaches supports the development of O3 risk assessment in forests from O3 exposure towards flux-based concepts. - Combined tree sap flow and eddy covariance-based methodologies yield stomatal O3 flux as 33% of the total stand flux.

  17. Expert judgement combination using moment methods

    International Nuclear Information System (INIS)

    Wisse, Bram; Bedford, Tim; Quigley, John

    2008-01-01

    Moment methods have been employed in decision analysis, partly to avoid the computational burden that decision models involving continuous probability distributions can suffer from. In the Bayes linear (BL) methodology, prior judgements about uncertain quantities are specified using expectation (rather than probability) as the fundamental notion. BL provides a strong foundation for moment methods, rooted in the work of de Finetti and Goldstein. The main objective of this paper is to discuss in what way expert assessments of moments can be combined, in a non-Bayesian way, to construct a prior assessment. We show that the linear pool can be justified in an analogous but technically different way to linear pools for probability assessments, and that this linear pool has a very convenient property: a linear pool of experts' assessments of moments is coherent if each of the experts has given coherent assessments. To determine the weights of the linear pool, we give a method of performance-based weighting analogous to Cooke's classical model and explore its properties. Finally, we compare its performance with the classical model on data gathered in applications of the classical model.
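The linear pool itself is simple to state concretely: each pooled moment is the weight-averaged expert assessment of that moment. A minimal sketch, with two hypothetical experts assessing a mean and a raw second moment under equal weights:

```python
def linear_pool(assessments, weights):
    """Linear opinion pool of experts' moment assessments.

    assessments: one dict per expert, e.g. {'mean': ..., 'second_moment': ...}
    weights: non-negative weights summing to one.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    pooled = {}
    for key in assessments[0]:
        pooled[key] = sum(w * a[key] for w, a in zip(weights, assessments))
    return pooled

# Hypothetical experts: both assessments are internally coherent
# (second moment >= mean squared).
experts = [{'mean': 10.0, 'second_moment': 104.0},
           {'mean': 12.0, 'second_moment': 148.0}]
print(linear_pool(experts, [0.5, 0.5]))  # {'mean': 11.0, 'second_moment': 126.0}
```

The pooled result here is itself coherent (implied variance 126 − 11² = 5 ≥ 0), which echoes the coherence-preservation property the abstract claims for the linear pool.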

  18. Toward a Mixed-Methods Research Approach to Content Analysis in The Digital Age: The Combined Content-Analysis Model and its Applications to Health Care Twitter Feeds.

    Science.gov (United States)

    Hamad, Eradah O; Savundranayagam, Marie Y; Holmes, Jeffrey D; Kinsella, Elizabeth Anne; Johnson, Andrew M

    2016-03-08

    Twitter's 140-character microblog posts are increasingly used to access information and facilitate discussions among health care professionals and between patients with chronic conditions and their caregivers. Recently, efforts have emerged to investigate the content of health care-related posts on Twitter. This marks a new area for researchers to investigate and apply content analysis (CA). In current infodemiology, infoveillance and digital disease detection research initiatives, quantitative and qualitative Twitter data are often combined, and there are no clear guidelines for researchers to follow when collecting and evaluating Twitter-driven content. The aim of this study was to identify studies on health care and social media that used Twitter feeds as a primary data source and CA as an analysis technique. We evaluated the resulting 18 studies based on a narrative review of previous methodological studies and textbooks to determine the criteria and main features of quantitative and qualitative CA. We then used the key features of CA and mixed-methods research designs to propose the combined content-analysis (CCA) model as a solid research framework for designing, conducting, and evaluating investigations of Twitter-driven content. We conducted a PubMed search to collect studies published between 2010 and 2014 that used CA to analyze health care-related tweets. The PubMed search and reference list checks of selected papers identified 21 papers; we excluded 3 and further analyzed 18. Results suggest that the methods used in these studies were neither purely quantitative nor purely qualitative, and a mixed-methods design was not explicitly chosen for data collection and analysis. A solid research framework is needed for researchers who intend to analyze Twitter data through the use of CA. We propose the CCA model as a useful framework that provides a straightforward approach to guiding Twitter-driven studies and that adds rigor to health care social media research.

  19. Assessment of MYCN amplification status in Tunisian neuroblastoma: CISH and MLPA combining approach.

    Science.gov (United States)

    H'Mida Ben Brahim, Dorra; Trabelsi, Saoussen; Chabchoub, Imen; Gargouri, Inesse; Harrabi, Imed; Moussa, Adnene; Chourabi, Maroua; Haddaji, Marwa; Sassi, Sihem; Mougou, Soumaya; Gribaa, Moez; Ben Ahmed, Slim; Zakhama, Abdelfattah; Nouri, Abdellatif; Saad, Ali

    2015-01-01

    Neuroblastoma (NB) shows a complex combination of genetic aberrations. Some of these represent poor prognostic factors that call for specific, intensive chemotherapy. MYCN amplification is the major adverse prognostic factor; it is frequently observed in aggressive neuroblastomas. To date, different methods are used to detect MYCN status. The primary aim of our study was to provide a critical assessment of MYCN status using two molecular techniques, CISH and MLPA. We also focused on the correlation between neuroblastoma genetic markers and the patients' clinical course among 15 Tunisian patients. We conducted a descriptive study of 15 pediatric Tunisian patients referred to our laboratory from 2004 to 2011, analyzing fresh and FFPE NB tumor tissues. No significant correlation was found between COG grade and patients' overall survival. Assessment of MYCN gene copy number by the kappa statistic revealed high concordance between the CISH and MLPA tests (kappa coefficient = 0.02). Despite misdiagnosing MYCN status below 5 copies, MLPA remains an effective molecular technique that enables screening for a large panel of genomic aberrations. Combining CISH and MLPA is therefore the molecular approach adopted in our laboratory. Our results allow pediatric oncologists to set up the first neuroblastoma therapeutic strategy based on molecular markers in Tunisia.

  20. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach that minimizes an energy function; its main contribution is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  1. Combination approaches with immune checkpoint blockade in cancer therapy

    Directory of Open Access Journals (Sweden)

    Maarten Swart

    2016-11-01

    In healthy individuals, immune checkpoint molecules prevent autoimmune responses and limit immune cell-mediated tissue damage. Tumors frequently exploit these molecules to evade eradication by the immune system. Over the past years, immune checkpoint blockade of cytotoxic T lymphocyte antigen-4 (CTLA-4) and programmed death-1 (PD-1) has emerged as a promising strategy to activate anti-tumor cytotoxic T cell responses. Although complete regression and long-term survival are achieved in some patients, not all patients respond. This review describes promising, novel combination approaches involving immune checkpoint blockade, aimed at increasing response rates to the single treatments.

  2. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    Science.gov (United States)

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508
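The abstract does not spell out its combination formula; as an illustration of the general idea of merging per-source reliabilities, a common baseline in protein-interaction integration is the noisy-OR rule, which assumes the sources err independently. The reliability values below are made up:

```python
def noisy_or(reliabilities):
    """Combine per-source reliabilities into one confidence that an
    interaction reported by all of these sources is real, assuming
    independent errors (noisy-OR rule)."""
    p_all_wrong = 1.0
    for r in reliabilities:
        p_all_wrong *= (1.0 - r)
    return 1.0 - p_all_wrong

# An interaction seen by two sources with reliabilities 0.5 and 0.6.
print(round(noisy_or([0.5, 0.6]), 2))  # 0.8
```

Note how the combined confidence exceeds each individual reliability, and how adding a noisy source (low r) changes the result only slightly, which is the robustness-to-noise property the paper evaluates.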

  3. Towards Multi-Method Research Approach in Empirical Software Engineering

    Science.gov (United States)

    Mandić, Vladimir; Markkula, Jouni; Oivo, Markku

    This paper presents the results of a literature analysis on Empirical Research Approaches in Software Engineering (SE). The analysis explores reasons why traditional methods, such as statistical hypothesis testing and experiment replication, are weakly utilized in the field of SE. It appears that the basic assumptions and preconditions of the traditional methods contradict the actual situation in SE. Furthermore, we have identified the main issues that should be considered by the researcher when selecting a research approach. Given the reasons for the weak utilization of traditional methods, we propose stronger use of a Multi-Method approach with Pragmatism as the philosophical standpoint.

  4. The Usefulness of Qualitative and Quantitative Approaches and Methods in Researching Problem-Solving Ability in Science Education Curriculum

    Science.gov (United States)

    Eyisi, Daniel

    2016-01-01

    Research in science education seeks to discover truth, which involves the combination of reasoning and experience. In order to find appropriate methods for teaching science students problem-solving skills, different research approaches are used by educational researchers, based on the data collection and analysis…

  5. A Combined Syntactical and Statistical Approach for R Peak Detection in Real-Time Long-Term Heart Rate Variability Analysis

    Directory of Open Access Journals (Sweden)

    David Pang

    2018-06-01

    Long-term heart rate variability (HRV) analysis is useful as a noninvasive technique for assessing autonomic nervous system activity. It provides a method for assessing many physiological and pathological factors that modulate the normal heartbeat. The performance of HRV analysis systems depends heavily on reliable and accurate detection of the R peak of the QRS complex. Ectopic beats caused by misdetection or arrhythmic events can introduce bias into HRV results, causing significant problems in their interpretation. This study presents a novel method for long-term detection of normal R peaks (which represent the normal heartbeat) in electrocardiographic signals, intended specifically for HRV analysis. The very low computational complexity of the proposed method, which combines and exploits the advantages of syntactical and statistical approaches, enables real-time applications. The approach was validated using the Massachusetts Institute of Technology-Beth Israel Hospital Normal Sinus Rhythm and the Fantasia databases, and has a sensitivity, positive predictivity, detection error rate, and accuracy of 99.998, 99.999, 0.003, and 99.996%, respectively.
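As a toy illustration of mixing a statistical rule (an amplitude threshold derived from the signal's mean and standard deviation) with a syntactic-style rule (a refractory period between accepted peaks), consider the sketch below. It is not the authors' algorithm; the synthetic signal, sampling rate, and thresholds are all assumptions:

```python
def detect_r_peaks(signal, fs, refractory_s=0.25):
    """Toy R-peak detector: statistical amplitude threshold plus a
    refractory-period rule (a crude stand-in for a syntactic grammar)."""
    n = len(signal)
    mean = sum(signal) / n
    std = (sum((x - mean) ** 2 for x in signal) / n) ** 0.5
    threshold = mean + 2.0 * std          # statistical part
    refractory = int(refractory_s * fs)   # syntactic-style timing rule
    peaks, last = [], -refractory
    for i in range(1, n - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1]
                and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

# Synthetic "ECG": flat baseline with a spike every second at 250 Hz.
fs = 250
sig = [0.0] * (4 * fs)
for beat in (fs, 2 * fs, 3 * fs):
    sig[beat] = 1.0
print(detect_r_peaks(sig, fs))  # [250, 500, 750]
```

A real detector would add band-pass filtering and adaptive thresholding, but the two-rule structure (amplitude statistics plus timing grammar) is the point being illustrated.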

  6. A new approach for heparin standardization: combination of scanning UV spectroscopy, nuclear magnetic resonance and principal component analysis.

    Directory of Open Access Journals (Sweden)

    Marcelo A Lima

    The year 2007 was marked by widespread adverse clinical responses to heparin use, leading to a global recall of potentially affected heparin batches in 2008. Several analytical methods have since been developed to detect impurities in heparin preparations; however, many are costly and dependent on instrumentation with only limited accessibility. A method based on a simple UV-scanning assay, combined with principal component analysis (PCA), was developed to detect impurities, such as glycosaminoglycans, other complex polysaccharides and aromatic compounds, in heparin preparations. Results were confirmed by NMR spectroscopy. This approach provides an additional, sensitive tool to determine heparin purity and safety, even where NMR spectroscopy failed, requiring only standard laboratory equipment and computing facilities.
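The PCA step of such an approach can be sketched with standard-library Python only: score each scan along the first principal component (computed here by power iteration) and flag the scan with the most extreme score as the suspect sample. The spectra below are fabricated for illustration; they are not heparin data:

```python
def first_pc_scores(spectra, iters=200):
    """Score each spectrum along the first principal component,
    computed by power iteration on the (implicit) covariance matrix."""
    n, m = len(spectra), len(spectra[0])
    mean = [sum(s[j] for s in spectra) / n for j in range(m)]
    X = [[s[j] - mean[j] for j in range(m)] for s in spectra]
    v = [1.0] * m
    for _ in range(iters):
        # w = (X^T X) v, without forming the covariance matrix explicitly
        t = [sum(X[i][j] * v[j] for j in range(m)) for i in range(n)]
        w = [sum(X[i][j] * t[i] for i in range(n)) for j in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(X[i][j] * v[j] for j in range(m)) for i in range(n)]

# Five "clean" scans plus one with an extra absorbance band (index 5).
normal = [0.1, 0.2, 0.9, 0.2, 0.1]
spiked = [0.1, 0.2, 0.9, 0.8, 0.1]   # hypothetical impurity peak
scans = [normal[:] for _ in range(5)] + [spiked]
scores = first_pc_scores(scans)
outlier = max(range(len(scores)), key=lambda i: abs(scores[i]))
print(outlier)  # 5
```

With real spectra one would use more components and a proper outlier statistic, but the flow (center, project, flag extremes) is the same.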

  7. A Ranking Analysis/An Interlinking Approach of New Triangular Fuzzy Cognitive Maps and Combined Effective Time Dependent Matrix

    Science.gov (United States)

    Adiga, Shreemathi; Saraswathi, A.; Praveen Prakash, A.

    2018-04-01

    This paper presents an interlinking approach of new Triangular Fuzzy Cognitive Maps (TrFCM) and the Combined Effective Time Dependent (CETD) matrix to rank the problems faced by transgender people. Section one begins with an introduction that briefly describes the scope of Triangular Fuzzy Cognitive Maps (TrFCM) and the CETD matrix. Section two models the causes of the problems faced by transgender people using the Triangular Fuzzy Cognitive Maps (TrFCM) method and performs the calculations using the data collected among transgender respondents. Section 3 discusses the reasons for the main causes of these problems. Section 4 applies Charles Spearman's coefficient of rank correlation by interlinking the Triangular Fuzzy Cognitive Maps (TrFCM) method and the CETD matrix. Section 5 shows the results based on our study.

  8. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-01-01

    Background Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors, but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Methods Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. Results A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. Conclusions When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to limited resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further investigation.

  9. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    Science.gov (United States)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features from remote sensing images directly and then use them to classify images into cloud or other things. But when the cloud is thin and small, these methods are inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. Firstly, the automatic cloud detection program in this paper uses the linear combination model to split the cloud information and surface information in the transparent cloud images, then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. AdaBoost can select the most effective features from many ordinary features, so the calculation time is largely reduced. Finally, we selected a cloud detection method based on tree structure and a multiple-feature detection method using an SVM classifier to compare with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
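The abstract does not give the linear combination model explicitly; one simple reading is per-pixel spectral unmixing, pixel ≈ a * cloud + (1 - a) * surface, with the cloud fraction a estimated by least squares across bands. The endmember spectra below are hypothetical:

```python
def cloud_fraction(pixel, cloud, surface):
    """Least-squares estimate of the mixing coefficient a in
    pixel ~ a * cloud + (1 - a) * surface, fitted across spectral bands.
    An illustration of the linear-combination idea, not the paper's model."""
    d = [c - s for c, s in zip(cloud, surface)]
    num = sum((p - s) * di for p, s, di in zip(pixel, surface, d))
    den = sum(di * di for di in d)
    return num / den

cloud = [0.9, 0.9, 0.9]       # bright, spectrally flat cloud endmember
surface = [0.1, 0.3, 0.2]     # hypothetical underlying-surface spectrum
mixed = [0.5 * c + 0.5 * s for c, s in zip(cloud, surface)]
print(round(cloud_fraction(mixed, cloud, surface), 3))  # 0.5
```

Subtracting the estimated surface contribution from each pixel leaves a cloud-only residual, which is what makes thin, semi-transparent cloud easier to classify.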

  10. Approaches to greenhouse gas accounting methods for biomass carbon

    International Nuclear Information System (INIS)

    Downie, Adriana; Lau, David; Cowie, Annette; Munroe, Paul

    2014-01-01

    This investigation examines different approaches for the GHG flux accounting of activities within a tight boundary of biomass C cycling, with scope limited to exclude all other aspects of the lifecycle. Alternative approaches are examined that a) account for all emissions including biogenic CO2 cycling - the biogenic method; b) account for the quantity of C that is moved to and maintained in the non-atmospheric pool - the stock method; and c) assume that the net balance of C taken up by biomass is neutral over the short term and hence there is no requirement to include this C in the calculation - the simplified method. This investigation demonstrates the inaccuracies in both emissions forecasting and abatement calculations that result from the use of the simplified method, which is commonly accepted for use. It has been found that the stock method is the most accurate and appropriate approach for use in calculating GHG inventories; however, shortcomings of this approach emerge when applied to abatement projects, as it does not account for the increase in biogenic CO2 emissions that are generated when non-CO2 GHG emissions in the business-as-usual case are offset. Therefore the biogenic method or a modified version of the stock method should be used to accurately estimate the GHG emissions abatement achieved by a project. This investigation uses both the derivation of methodology equations from first principles and worked examples to explore the fundamental differences in the alternative approaches. Examples are developed for three project scenarios: landfill, combustion and slow pyrolysis (biochar) of biomass. -- Highlights: • Different approaches can be taken to account for the GHG emissions from biomass. • Simplification of GHG accounting methods is useful, however, can lead to inaccuracies. • Approaches used currently are often inadequate for practises that store carbon. • Accounting methods for emissions forecasting can be inadequate for
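The three accounting approaches can be sketched as simple flux calculations. The function names, the sign convention (net emissions positive, in t CO2-e), and the CH4 global warming potential of 28 are assumptions for illustration, not the paper's equations:

```python
def biogenic_method(co2_uptake, co2_emitted, ch4_emitted, gwp_ch4=28):
    """Count all biogenic CO2 flows, in and out (t CO2-e)."""
    return co2_emitted + ch4_emitted * gwp_ch4 - co2_uptake

def stock_method(carbon_stored_t_c, ch4_emitted, gwp_ch4=28):
    """Credit C kept out of the atmosphere; 1 t C = 44/12 t CO2."""
    return ch4_emitted * gwp_ch4 - carbon_stored_t_c * 44.0 / 12.0

def simplified_method(ch4_emitted, gwp_ch4=28):
    """Assume biogenic CO2 is neutral; count only non-CO2 gases."""
    return ch4_emitted * gwp_ch4

# A hypothetical biochar-style scenario: 100 t CO2 taken up by the
# biomass, 60 t re-emitted on pyrolysis, no CH4.
print(biogenic_method(100, 60, 0))  # -40
```

The differences the paper highlights fall out directly: the simplified method reports zero for any CH4-free scenario, regardless of how much carbon is actually stored or released.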

  11. Reconstructing Regional Ionospheric Electron Density: A Combined Spherical Slepian Function and Empirical Orthogonal Function Approach

    Science.gov (United States)

    Farzaneh, Saeed; Forootan, Ehsan

    2018-03-01

    Computerized ionospheric tomography is a method for imaging the Earth's ionosphere using a sounding technique and computing slant total electron content (STEC) values from data of the global positioning system (GPS). The most common approach to ionospheric tomography is the voxel-based model, in which (1) the ionosphere is divided into voxels, (2) the STEC is measured along (many) satellite signal paths, and finally (3) an inversion procedure is applied to reconstruct the electron density distribution of the ionosphere. In this study, a computationally efficient approach is introduced, which improves the inversion procedure of step 3. Our proposed method combines the empirical orthogonal functions and the spherical Slepian base functions to describe the vertical and horizontal distribution of electron density, respectively. Thus, it can be applied to regional and global case studies. A numerical application is demonstrated using ground-based GPS data over South America. Our results are validated against ionospheric tomography obtained from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) observations and the global ionosphere map estimated by international centers, as well as by comparison with STEC derived from independent GPS stations. Using the proposed approach, we find that with 30 GPS measurements in South America one can achieve accuracy comparable to that of COSMIC data, within the reported accuracy (1 × 10^11 el/cm^3) of the product. Comparisons with real observations of two GPS stations indicate that the absolute difference is less than 2 TECU (where 1 total electron content unit, TECU, is 10^16 electrons/m^2).
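The inversion in step 3 (recovering densities from STEC line integrals) can be illustrated with the classic Kaczmarz/ART row-action solver, a standard tool in computerized tomography. The two-voxel geometry and path lengths below are hypothetical, and the paper's actual procedure projects onto an EOF/Slepian basis rather than raw voxels:

```python
def kaczmarz(A, b, sweeps=100):
    """ART/Kaczmarz iteration: solve the tomographic system
    STEC = A @ density one ray (row) at a time."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, obs in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            step = (obs - dot) / norm2
            x = [xi + step * r for xi, r in zip(x, row)]
    return x

# Two voxels, three ray paths; entries are path lengths through each voxel.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
true_density = [3.0, 5.0]
b = [sum(a * d for a, d in zip(row, true_density)) for row in A]  # STEC data
x = kaczmarz(A, b)
print([round(v, 3) for v in x])  # [3.0, 5.0]
```

Expanding the unknowns in a small set of basis functions, as the paper does, shrinks the number of columns of A dramatically, which is where the computational saving comes from.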

  12. Comparison of surface mass balance of ice sheets simulated by positive-degree-day method and energy balance approach

    Directory of Open Access Journals (Sweden)

    E. Bauer

    2017-07-01

    Glacial cycles of the late Quaternary are controlled by the asymmetrically varying mass balance of continental ice sheets in the Northern Hemisphere. Surface mass balance is governed by processes of ablation and accumulation. Here two ablation schemes, the positive-degree-day (PDD) method and the surface energy balance (SEB) approach, are compared in transient simulations of the last glacial cycle with the Earth system model of intermediate complexity CLIMBER-2. The standard version of the CLIMBER-2 model incorporates the SEB approach and simulates ice volume variations in reasonable agreement with paleoclimate reconstructions during the entire last glacial cycle. Using results from the standard CLIMBER-2 model version, we simulated ablation with the PDD method in offline mode by applying different combinations of three empirical parameters of the PDD scheme. We found that none of the parameter combinations allow us to simulate a surface mass balance of the American and European ice sheets similar to that obtained with the standard SEB method. The use of constant values for the empirical PDD parameters led either to too much ablation during the first phase of the last glacial cycle or too little ablation during the final phase. We then substituted the standard SEB scheme in CLIMBER-2 with the PDD scheme and performed a suite of fully interactive (online) simulations of the last glacial cycle with different combinations of PDD parameters. The results of these simulations confirmed the results of the offline simulations: no combination of PDD parameters realistically simulates the evolution of the ice sheets during the entire glacial cycle. The use of constant parameter values in the online simulations leads either to a buildup of too much ice volume at the end of the glacial cycle or too little ice volume at the beginning. Even when the model correctly simulates global ice volume at the last glacial maximum (21 ka), it is unable to simulate
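At its core the PDD method is a short calculation: melt equals an empirical degree-day factor times the sum of positive daily mean temperatures. The sketch below omits the scheme's stochastic temperature-variability term and its separate snow/ice factors, and the degree-day factor used is a made-up value:

```python
def pdd_ablation(daily_temps_c, ddf_mm_per_deg_day):
    """Positive-degree-day melt estimate (mm water equivalent):
    degree-day factor times the sum of above-zero daily mean temps."""
    pdd = sum(max(t, 0.0) for t in daily_temps_c)
    return ddf_mm_per_deg_day * pdd

# One week of daily mean temperatures (deg C); hypothetical DDF of 3 mm/(deg C day).
week = [-5.0, -1.0, 0.0, 2.0, 4.0, 1.0, -2.0]
print(pdd_ablation(week, 3.0))  # 21.0  (PDD sum = 7.0 deg C day)
```

Because the degree-day factor is a fixed empirical constant, the scheme cannot respond to changing insolation or albedo, which is one plausible reason constant parameters fail over a full glacial cycle as the abstract reports.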

  13. Realization of the Evristic Combination Methods by Means of Computer Graphics

    Directory of Open Access Journals (Sweden)

    S. A. Novoselov

    2012-01-01

    The paper looks at ways of enhancing and stimulating the creative activity and initiative of pedagogic students - the prospective specialists called on to educate and bring up socially and professionally competent, original-thinking, versatile personalities. For developing their creative abilities, the author recommends introducing the heuristic combination methods applied in facilitating engineering creativity, associative-synectic technology, and computer graphics tools. The paper contains a comparative analysis of the main heuristic method operations and the computer graphics editor in creating a visual composition. Examples of implementing the heuristic combination methods are described, along with extracts from the laboratory classes designed for developing creativity and its motivation. The approbation of the given method in several universities confirms the prospects of enhancing the students' learning and creative activities.

  14. Comet Methy-sens and DNMTs transcriptional analysis as a combined approach in epigenotoxicology

    Directory of Open Access Journals (Sweden)

    Alessio Perotti

    2015-05-01

    In conclusion, our data demonstrate that Comet Methy-sens, in combination with the analysis of transcriptional levels of DNA methyltransferases, represents a simple and multifunctional approach to implementing biomonitoring studies on the epigenotoxicological effects of known and unknown xenobiotics.

  15. The evaluation of student-centredness of teaching and learning: a new mixed-methods approach.

    Science.gov (United States)

    Lemos, Ana R; Sandars, John E; Alves, Palmira; Costa, Manuel J

    2014-08-14

    The aim of the study was to develop and consider the usefulness of a new mixed-methods approach to evaluating the student-centredness of teaching and learning on undergraduate medical courses. An essential paradigm for the evaluation was the coherence between how teachers conceptualise their practice (espoused theories) and their actual practice (theories-in-use). The context was a module within an integrated basic sciences course in an undergraduate medical degree programme. The programme had an explicit intention of providing a student-centred curriculum. A content analysis framework based on Weimer's dimensions of student-centred teaching was used to analyze data collected from individual interviews with seven teachers, to identify espoused theories, and from 34 h of classroom observations and one student focus group, to identify theories-in-use. The interviewees were identified by purposeful sampling. The findings from the three methods were triangulated to evaluate the student-centredness of teaching and learning on the course. Different, but complementary, perspectives of student-centredness were identified by each method. The triangulation of the findings revealed coherence between the teachers' espoused theories and theories-in-use. A mixed-methods approach that combined classroom observations with interviews from a purposeful sample of teachers and students offered a useful evaluation of the extent of student-centredness of teaching and learning on this basic science course. Our case study suggests that this new approach is applicable to other courses in medical education.

  16. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper, where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigations are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detector, truncated data, and dual-source CT.
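    The beam-scatter-kernel superposition idea mentioned above can be illustrated as a convolution of the primary projection with a broad kernel. The sketch below is a minimal, hypothetical version: it assumes a spatially invariant Gaussian kernel and an arbitrary scatter amplitude, not the calibrated, object-dependent kernels the review discusses.

```python
import numpy as np

def scatter_estimate(primary, kernel_width, amplitude=0.1):
    """Illustrative beam-scatter-kernel superposition: model scatter in a 2D
    projection as the primary image convolved with a broad Gaussian kernel
    (built directly in frequency space and applied via the FFT)."""
    ny, nx = primary.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Fourier transform of a spatial Gaussian with std = kernel_width pixels
    H = np.exp(-2.0 * (np.pi * kernel_width) ** 2 * (fx**2 + fy**2))
    scatter = amplitude * np.real(np.fft.ifft2(np.fft.fft2(primary) * H))
    return scatter

proj = np.ones((64, 64))                 # flat projection for illustration
s = scatter_estimate(proj, kernel_width=8.0)
corrected = proj - s                     # simplest subtractive compensation
```

For a flat projection the convolution leaves the image unchanged, so the estimated scatter is simply `amplitude` everywhere; real projections would yield a smoothed, object-dependent scatter map.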

  17. Socratic Method as an Approach to Teaching

    Directory of Open Access Journals (Sweden)

    Haris Delić

    2016-10-01

    Full Text Available In this article we present a theoretical view of Socrates' life and his method of teaching. After the biographical facts about Socrates, we explain the method he used in teaching and its two main types, the Classic and the Modern Socratic Method. Since the core of Socrates' approach is dialogue as a form of teaching, we explain how exactly a Socratic dialogue proceeds. In addition, we present two examples of dialogues that Socrates led, Meno and Gorgias. The Socratic circle, a form of seminar that is central to group discussion of a given theme, is also presented in this paper. At the end, some disadvantages of the method are explained. This paper gives the reader an understanding of this approach to teaching, with Socrates as an example of how a successful teacher leads students towards the goal.

  18. A combined rheology and time domain NMR approach for determining water distributions in protein blends

    NARCIS (Netherlands)

    Dekkers, Birgit L.; Kort, de Daan W.; Grabowska, Katarzyna J.; Tian, Bei; As, Van Henk; Goot, van der Atze Jan

    2016-01-01

    We present a combined time domain NMR and rheology approach to quantify the water distribution in a phase separated protein blend. The approach forms the basis for a new tool to assess the microstructural properties of phase separated biopolymer blends, making it highly relevant for many food and

  19. Parallelised Krylov subspace method for reactor kinetics by IQS approach

    International Nuclear Information System (INIS)

    Gupta, Anurag; Modak, R.S.; Gupta, H.P.; Kumar, Vinod; Bhatt, K.

    2005-01-01

    Nuclear reactor kinetics involves the numerical solution of the space-time-dependent multi-group neutron diffusion equation. Two distinct approaches exist for this purpose: the direct (implicit time differencing) approach and the improved quasi-static (IQS) approach. Both approaches need solutions of static space-energy-dependent diffusion equations at successive time-steps, the steps being relatively smaller for the direct approach. These solutions are usually obtained by Gauss-Seidel type iterative methods. For a faster solution, Krylov subspace methods have been tried and also parallelised by many investigators. However, these studies seem to have been done only for the direct approach. In the present paper, parallelised Krylov methods are applied to the IQS approach in addition to the direct approach. It is shown that the speed-up obtained for IQS is higher than that for the direct approach. The reasons for this are also discussed. Thus, the use of the IQS approach along with parallelised Krylov solvers seems to be a promising scheme.
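    As an illustration of the Krylov subspace solvers discussed in this abstract, the sketch below solves a static one-group, one-dimensional diffusion system with a hand-rolled conjugate gradient iteration. The grid size, cross sections and the serial CG variant are illustrative assumptions, not the authors' parallelised multi-group implementation.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Minimal Krylov-subspace (CG) solver for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new direction, A-conjugate to the old ones
        rs = rs_new
    return x

# Discretized 1D one-group diffusion operator: -D u'' + sigma_a u = source
n, h, D, sigma_a = 50, 0.1, 1.0, 0.5
main_diag = 2 * D / h**2 + sigma_a
A = (np.diag(np.full(n, main_diag))
     + np.diag(np.full(n - 1, -D / h**2), 1)
     + np.diag(np.full(n - 1, -D / h**2), -1))
source = np.ones(n)
flux = conjugate_gradient(A, source)
```

The tridiagonal system here stands in for the large static diffusion systems that must be solved at each time-step in both the direct and IQS schemes.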

  20. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    Science.gov (United States)

    Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik

    2015-01-01

    The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.

  1. Fast and accurate resonance assignment of small-to-large proteins by combining automated and manual approaches.

    Directory of Open Access Journals (Sweden)

    Markus Niklasson

    2015-01-01

    Full Text Available The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.

  2. A non-parametric meta-analysis approach for combining independent microarray datasets: application using two microarray datasets pertaining to chronic allograft nephropathy

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2008-02-01

    Full Text Available Abstract Background With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, often the sample size for each individual microarray experiment is small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN) to those with normally functioning allografts. Results The simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. For the application on the two CAN studies, we identified 309 distinct genes that were differentially expressed in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among those genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predicted results all conformed to their pathologist-diagnosed class labels. Conclusion We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been
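    The rank-based flavor of such a non-parametric combination can be sketched as follows. This is a toy illustration with invented per-gene statistics, not the authors' actual procedure: each study ranks its genes by a test statistic, and the ranks (rather than the distribution-dependent statistics) are averaged across studies.

```python
import numpy as np

def combine_study_ranks(stat_by_study):
    """Rank genes within each study by a test statistic (1 = smallest),
    then average the ranks across the independent studies."""
    ranks = [np.asarray(s, float).argsort().argsort() + 1 for s in stat_by_study]
    return np.mean(ranks, axis=0)

# Two hypothetical studies measuring the same five genes
study_a = [0.1, 2.5, 0.3, 3.1, 0.2]
study_b = [0.2, 2.2, 0.1, 2.9, 0.4]

avg_rank = combine_study_ranks([study_a, study_b])
top_genes = np.argsort(avg_rank)[::-1][:2]  # genes ranked consistently high in both studies
```

Because only within-study ranks are used, no assumption about the distribution of the statistics across platforms or labs is needed, which is the point of the non-parametric approach.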

  3. A mixed-methods approach to systematic reviews.

    Science.gov (United States)

    Pearson, Alan; White, Heath; Bath-Hextall, Fiona; Salmond, Susan; Apostolo, Joao; Kirkpatrick, Pamela

    2015-09-01

    There are an increasing number of published single-method systematic reviews that focus on different types of evidence related to a particular topic. As policy makers and practitioners seek clear directions for decision-making from systematic reviews, it is likely that it will be increasingly difficult for them to identify 'what to do' if they are required to find and understand a plethora of syntheses related to a particular topic. Mixed-methods systematic reviews are designed to address this issue and have the potential to produce systematic reviews of direct relevance to policy makers and practitioners. On the basis of the recommendations of the Joanna Briggs Institute International Mixed Methods Reviews Methodology Group in 2012, the Institute adopted a segregated approach to mixed-methods synthesis as described by Sandelowski et al., which consists of separate syntheses of each component method of the review. The Joanna Briggs Institute's mixed-methods synthesis of the findings of the separate syntheses uses a Bayesian approach to translate the findings of the initial quantitative synthesis into qualitative themes and pool these with the findings of the initial qualitative synthesis.

  4. Nutrition and culture in professional football. A mixed method approach.

    Science.gov (United States)

    Ono, Mutsumi; Kennedy, Eileen; Reeves, Sue; Cronin, Linda

    2012-02-01

    An adequate diet is essential for the optimal performance of professional football (soccer) players. Existing studies have shown that players fail to consume such a diet, without interrogating the reasons for this. The aim of this study was to explore the difficulties professional football players experience in consuming a diet for optimal performance. It utilized a mixed method approach, combining nutritional intake assessment with qualitative interviews, to ascertain both what was consumed and the wider cultural factors that affect consumption. The study found a high variability in individual intake which ranged widely from 2648 to 4606 kcal/day. In addition, the intake of carbohydrate was significantly lower than that recommended. The study revealed that the main food choices for carbohydrate and protein intake were pasta and chicken respectively. Interview results showed the importance of tradition within the world of professional football in structuring the players' approach to nutrition. In addition, the players' personal eating habits that derived from their class and national habitus restricted their food choice by conflicting with the dietary choices promoted within the professional football clubs. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Promising ethical arguments for product differentiation in the organic food sector. A mixed methods research approach

    OpenAIRE

    Zander, Katrin; Stolz, Hanna; Hamm, Ulrich

    2013-01-01

    Ethical consumerism is a growing trend worldwide. Ethical consumers’ expectations are increasing and neither the Fairtrade nor the organic farming concept covers all the ethical concerns of consumers. Against this background the aim of this research is to elicit consumers’ preferences regarding organic food with additional ethical attributes and their relevance at the market place. A mixed methods research approach was applied by combining an Information Display Matrix, Focus Group Discuss...

  6. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular related diseases, such as hypertension and diabetes, which are known to affect the retinal blood vessels' appearance. This work proposes an unsupervised method for the segmentation of retinal vessels images using a combined matched filter, Frangi's filter and Gabor Wavelet filter to enhance the images. The combination of these three filters in order to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images using the weighted mean approach. The first method is based on deformable models and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, Drive and Stare. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
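    The two filter-combination strategies described in this abstract (weighted mean and median ranking of the enhanced images) can be sketched roughly as below, using small random arrays in place of real matched-filter, Frangi and Gabor wavelet responses; the weights and threshold are illustrative, not the paper's tuned values.

```python
import numpy as np

def weighted_mean_combination(responses, weights):
    """Combine normalized filter response maps by a weighted mean."""
    responses = [r / (r.max() + 1e-12) for r in responses]  # scale each map to [0, 1]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * r for wi, r in zip(w, responses))

def median_ranking_combination(responses):
    """Rank pixels within each response map, then take the median rank per pixel."""
    ranks = [r.ravel().argsort().argsort().reshape(r.shape) for r in responses]
    return np.median(np.stack(ranks), axis=0)

# Toy stand-ins for the three enhanced images
rng = np.random.default_rng(0)
maps = [rng.random((4, 4)) for _ in range(3)]

combined = weighted_mean_combination(maps, [0.4, 0.3, 0.3])
ranked = median_ranking_combination(maps)
segmented = combined > combined.mean()  # simple threshold criterion
```

Rank-based combination makes the result insensitive to the very different dynamic ranges of the three filters, which is why a plain threshold suffices after median ranking.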

  7. Methods of counting ribs on chest CT: the modified sternomanubrial approach

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Kyung Sik; Kim, Sung Jin; Jeon, Min Hee; Lee, Seung Young; Bae, Il Hun [Chungbuk National University, Cheongju (Korea, Republic of)

    2007-08-15

    The purpose of this study was to evaluate the accuracy of each method of counting ribs on chest CT and to propose a new method: an anterior approach using the sternocostal joints. CT scans of 38 rib lesions in 27 patients were analyzed (fracture: 25, metastasis: 11, benign bone disease: 2). Each lesion was independently counted by three radiologists using three different methods for counting ribs: the sternoclavicular approach, the xiphisternal approach and the modified sternomanubrial approach. For the evaluation of each method, the rib lesions were divided into three parts according to the location of the lesion, as follows: the upper part (between the first and fourth thoracic vertebrae), the middle part (between the fifth and eighth) and the lower part (between the ninth and twelfth). The most accurate method was the modified sternomanubrial approach (99.1%). The accuracies of the xiphisternal approach and the sternoclavicular approach were 95.6% and 88.6%, respectively. The modified sternomanubrial approach showed the highest accuracy in all three parts (100%, 100% and 97.9%, respectively). We propose a new method for counting ribs, the modified sternomanubrial approach, which was more accurate than the known methods in all parts of the bony thorax, and it may be easier and quicker than the others in clinical practice.

  8. Mixed-methods approaches in health research in Nepal

    OpenAIRE

    Simkhada, Padam; Van Teijlingen, Edwin; Wasti, Sharada Prasad; Sathian, B.

    2014-01-01

    Combining and integrating a mixture of qualitative and quantitative methods in one single study is widely used in health and social care research in high-income countries. This editorial adds a few words of advice to the novice mixed-methods researcher in Nepal.

  9. Fracture Failure of Reinforced Concrete Slabs Subjected to Blast Loading Using the Combined Finite-Discrete Element Method

    Directory of Open Access Journals (Sweden)

    Z. M. Jaini

    Full Text Available Abstract Numerical modeling of fracture failure is challenging due to various issues in the constitutive law and in the transition from continuum to discrete bodies. Therefore, this study presents the application of the combined finite-discrete element method to investigate the fracture failure of reinforced concrete slabs subjected to blast loading. In the numerical modeling, the interaction of non-uniform blast loading with the concrete slab was modeled by incorporating the finite element method with a rotating crack approach, and the discrete element method to model cracking, fracture onset and post-failure behaviour. A time-varying pressure-time history based on the mapping method was adopted to define the blast loading. The Mohr-Coulomb criterion with Rankine cut-off and the von Mises criterion were applied for concrete and steel reinforcement, respectively. The results for scabbing, spalling and fracture show a reliable prediction of damage and fracture.

  10. Combined use of nanocarriers and physical methods for percutaneous penetration enhancement.

    Science.gov (United States)

    Dragicevic, Nina; Maibach, Howard

    2018-02-06

    Dermal and transdermal drug delivery (due to its non-invasiveness, avoidance of the first-pass metabolism, controlling the rate of drug input over a prolonged time, etc.) have gained significant acceptance. Several methods are employed to overcome the permeability barrier of the skin, improving drug penetration into/through skin. Among chemical penetration enhancement methods, nanocarriers have been extensively studied. When applied alone, nanocarriers mostly deliver drugs to skin and can be used to treat skin diseases. To achieve effective transdermal drug delivery, nanocarriers should be applied with physical methods, as they act synergistically in enhancing drug penetration. This review describes combined use of frequently used nanocarriers (liposomes, novel elastic vesicles, lipid-based and polymer-based nanoparticles and dendrimers) with the most efficient physical methods (microneedles, iontophoresis, ultrasound and electroporation) and demonstrates superiority of the combined use of nanocarriers and physical methods in drug penetration enhancement compared to their single use. Copyright © 2018. Published by Elsevier B.V.

  11. Combined transnasal and transoral endoscopic approach to a transsphenoidal encephalocele in an infant.

    Science.gov (United States)

    Tan, Sien Hui; Mun, Kein Seong; Chandran, Patricia Ann; Manuel, Anura Michelle; Prepageran, Narayanan; Waran, Vicknes; Ganesan, Dharmendra

    2015-07-01

    This paper reports an unusual case of a transsphenoidal encephalocele and discusses our experience with a minimally invasive management. To the best of our knowledge, we present the first case of a combined endoscopic transnasal and transoral approach to a transsphenoidal encephalocele in an infant. A 17-day-old boy, who was referred for further assessment of upper airway obstruction, presented with respiratory distress and feeding difficulties. Bronchoscopy and imaging revealed a transsphenoidal encephalocele. At the age of 48 days, he underwent a combined endoscopic transnasal and transoral excision of the nasal component of the encephalocele. This approach, with the aid of neuronavigation, allows good demarcation of the extra-cranial neck of the transsphenoidal encephalocele. We were able to cauterize and carefully dissect the sac prior to excision. The defect of the neck was clearly visualized, and Valsalva manoeuvre was performed to exclude any CSF leak. As the defect was small, it was allowed to heal by secondary intention. The patient's recovery was uneventful, and he tolerated full feeds orally on day 2. Postoperative imaging demonstrated no evidence of recurrence of the nasal encephalocele. Endoscopic follow-up showed good healing of the mucosa and no cerebrospinal fluid leak. The surgical management of transsphenoidal encephalocele in neonates and infants is challenging. We describe a safe technique with low morbidity in managing such a condition. The combined endoscopic transnasal and transoral approach with neuronavigation is a minimally invasive, safe and feasible alternative, even for children below 1 year of age.

  12. Determination of lifetimes of nuclear excited states using the Recoil Distance Doppler Shift Method in combination with magnetic spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    Doncel, M. [Universidad de Salamanca, Laboratorio de Radiaciones Ionizantes, Salamanca (Spain); Royal Institute of Technology, Department of Physics, Stockholm (Sweden); University of Liverpool, Department of Physics, Oliver Lodge Laboratory, Liverpool (United Kingdom); Gadea, A. [CSIC-University of Valencia, Istituto de Fisica Corpuscular, Valencia (Spain); Valiente-Dobon, J.J. [INFN, Laboratori Nazionali di Legnaro, Legnaro (Italy); Quintana, B. [Universidad de Salamanca, Laboratorio de Radiaciones Ionizantes, Salamanca (Spain); Modamio, V. [INFN, Laboratori Nazionali di Legnaro, Legnaro (Italy); University of Oslo, Oslo (Norway); Mengoni, D. [Dipartimento di Fisica e Astronomia, Universita di Padova, Padova (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Padova, Padova (Italy); Moeller, O.; Pietralla, N. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); Dewald, A. [Institut fuer Kernphysik, Universitaet Koeln (Germany)

    2017-10-15

    The current work presents the determination of lifetimes of nuclear excited states using the Recoil Distance Doppler Shift Method in combination with spectrometers for ion identification, normalizing the intensity of the peaks by the number of ions detected in the spectrometer. This is shown to be a valid technique that produces results comparable to the ones obtained by the conventional shifted-to-unshifted peak ratio method. The technique has been validated using data measured with the γ-ray array AGATA, the PRISMA spectrometer and the Cologne plunger setup. In this paper a test performed with the AGATA-PRISMA setup at LNL and the advantages of this new approach with respect to the conventional Recoil Distance Doppler Shift Method are discussed. (orig.)

  13. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods

    Directory of Open Access Journals (Sweden)

    Pontil Massimiliano

    2009-10-01

    Full Text Available Abstract Background Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino acids are systematically mutated to alanine and the changes in the free energy of binding (ΔΔG) are measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and, if mutated, they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. Results We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and it compares favourably to other available methods. In particular we find the best performance using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision of 56% and a recall of 65%. Conclusion We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been
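    The idea of feeding energy terms into a machine learning classifier can be sketched as follows. A simple perceptron stands in for the Support Vector Machines and Gaussian Processes of the paper, and all energy values and labels are invented for illustration only.

```python
import numpy as np

# Toy training set: rows are interface residues, columns are energy terms
# (van der Waals, solvation, hydrogen bonds, Coulomb electrostatics);
# labels: 1 = hot spot (ddG >= 2 kcal/mol), 0 = not a hot spot.
X = np.array([[1.8, 0.9, 2.0, 1.5],
              [0.2, 0.1, 0.3, 0.2],
              [1.6, 1.1, 1.8, 1.2],
              [0.3, 0.2, 0.1, 0.4]])
y = np.array([1, 0, 1, 0])

# Perceptron training (a stand-in for the SVM of the paper)
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(20):                       # a few epochs over the training set
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi             # update only on misclassification
        b += (yi - pred)

def predict(x):
    """Classify a residue's energy-term vector as hot spot (1) or not (0)."""
    return 1 if x @ w + b > 0 else 0
```

The point carried over from the paper is purely the feature design: the classifier sees only physically motivated energy terms, so any linear (or kernel) learner can combine them.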

  14. Low-Energy Charge Transfer in Multiply-Charged Ion-Atom Collisions Studied with the Combined SCVB-MOCC Approach

    Directory of Open Access Journals (Sweden)

    B. Zygelman

    2002-03-01

    Full Text Available A survey of theoretical studies of charge transfer involving collisions of multiply-charged ions with atomic neutrals (H and He) is presented. The calculations utilized the quantum-mechanical molecular-orbital close-coupling (MOCC) approach, where the requisite potential curves and coupling matrix elements have been obtained with the spin-coupled valence bond (SCVB) method. Comparison is made among various collision partners, for equicharged systems, where it is illustrated that even for total charge transfer cross sections, scaling laws do not exist for low-energy collisions (i.e. < 1 keV/amu). While various empirical scaling laws are well known in the intermediate- and high-energy regimes, the multi-electron configurations of the projectile ions result in a rich and varied low-energy dependence, requiring an explicit calculation for each collision-partner pair. Future charge transfer problems to be addressed with the combined SCVB-MOCC approach are briefly discussed.

  15. Exploring viral reservoir: The combining approach of cell sorting and droplet digital PCR.

    Science.gov (United States)

    Gibellini, Lara; Pecorini, Simone; De Biasi, Sara; Pinti, Marcello; Bianchini, Elena; De Gaetano, Anna; Digaetano, Margherita; Pullano, Rosalberta; Lo Tartaro, Domenico; Iannone, Anna; Mussini, Cristina; Cossarizza, Andrea; Nasi, Milena

    2018-02-01

    Combined antiretroviral therapy (cART) blocks different steps of HIV replication and maintains plasma viral RNA at undetectable levels. The virus can remain in long-living cells and create a reservoir where HIV can restart replicating after cART discontinuation. Persistent viral production triggers and maintains persistent immune activation, a well-known feature of chronic HIV infection, which contributes both to premature aging and to the increased morbidity and mortality of HIV-positive patients. The new frontier in the treatment of HIV infection is the eradication of the virus from all host cells and tissues. For this reason, it is crucial to have a clear and precise idea of where the virus hides, and which cells keep it silent. Important efforts have been made to improve the detection of viral reservoirs, and new techniques now give the opportunity to characterize them. Among these techniques, a strategic approach based upon cell sorting and droplet digital PCR (ddPCR) is opening new horizons and opportunities for research. This review provides an overview of the methods that combine cell sorting and ddPCR for the quantification of HIV DNA in different cell types, and for the detection of its maintenance. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. A combination of the acoustic radiosity and the image source method

    DEFF Research Database (Denmark)

    Koutsouris, Georgios I.; Brunskog, Jonas; Jeong, Cheol-Ho

    2012-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...

  17. [Efficiency of a combined method of hemorrhoid treatment using HAL-RAR and laser destruction].

    Science.gov (United States)

    Rodoman, G V; Kornev, L V; Shalaeva, T I; Malushenko, R N

    2017-01-01

    To develop a combined method of treating hemorrhoids with arterial ligation under Doppler control and laser destruction of internal and external hemorrhoids. The study included 100 patients with stage II and III chronic hemorrhoids. The combined HAL-laser method was used in the study group, the HAL-RAR technique in control group 1 and closed hemorrhoidectomy with a linear stapler in control group 2. A comparative evaluation of the results in all groups was performed. The combined method overcomes the drawbacks of traditional surgical treatment and the limitations of HAL-RAR in eliminating the external component. Moreover, it has a higher efficiency in treating stage II-III hemorrhoids compared with HAL-RAR and is equally safe and well tolerated by patients. The method does not increase the risk of recurrence, and it reduces the incidence of complications and the time of disability.

  18. The Application of a Multiphase Triangulation Approach to Mixed Methods: The Research of an Aspiring School Principal Development Program

    Science.gov (United States)

    Youngs, Howard; Piggot-Irvine, Eileen

    2012-01-01

    Mixed methods research has emerged as a credible alternative to unitary research approaches. The authors show how a combination of a triangulation convergence model with a triangulation multilevel model was used to research an aspiring school principal development pilot program. The multilevel model is used to show the national and regional levels…

  19. Cooperative method development

    DEFF Research Database (Denmark)

    Dittrich, Yvonne; Rönkkö, Kari; Eriksson, Jeanette

    2008-01-01

    The development of methods, tools and process improvements is best based on an understanding of the development practice to be supported. Qualitative research has been proposed as a method for understanding the social and cooperative aspects of software development. However, qualitative research is not easily combined with the improvement orientation of an engineering discipline. During the last 6 years, we have applied an approach we call 'cooperative method development', which combines qualitative social science fieldwork with problem-oriented method, technique and process improvement. The action research based approach, focusing on shop floor software development practices, allows an understanding of how contextual contingencies influence the deployment and applicability of methods, processes and techniques. This article summarizes the experiences and discusses the further development

  20. Analysis Method of Combine Harvesters Technical Level by Functional and Structural Parameters

    Directory of Open Access Journals (Sweden)

    E. V. Zhalnin

    2018-01-01

    Full Text Available The analysis of modern methods for evaluating the technical level of grain harvesters revealed inconsistency among the various criteria in use: comparative parameters, dimensionless series, company names, motor power, header width, capacity at the manufacturer's plant, and advertising brands. (Purpose of research) This has led to a variety of harvester model names, which significantly complicates the assessment of their technical level, complicates the farmer's choice of a suitable model, obscures the continuity between generations of combines, makes it impossible to analyze trends in their development, does not disclose the technological essence of a model and, most importantly, prevents combines from being compared with each other. The figures in the name of a harvester model are not functionally related to its main parameters and performance capabilities. (Materials and methods) A close correlation, in the form of a linear equation, was revealed between the design parameters of combines and their capacity. Verification of this equation during combine operation showed that it is statistically stable, with estimates always within the confidence interval at an error of 5-8 percent. Of the many factors affecting harvester performance per hour of net time, the four parameters that correlate most closely with it were found to be: motor power and the areas of the separation concave, the straw walkers and the cleaning sieves. (Results and discussion) On the basis of the revealed correlation, we propose a new method for assessing the technical level of combines, based on the throughput (kg/s) of the wetted material and a size series indicating the nominal productivity of the combine in centners of grain harvested per hour of basic time. The methodological background and mathematical apparatus
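    The linear relationship between design parameters and combine capacity described in this abstract can be illustrated with an ordinary least-squares fit. The parameter values and throughputs below are hypothetical placeholders, not the authors' data, and the four columns mirror the four parameters the abstract names.

```python
import numpy as np

# Hypothetical combines: [motor power (kW), separation concave area (m2),
# straw-walker area (m2), cleaning sieve area (m2)] and throughput q (kg/s)
X = np.array([[150.0, 1.1, 5.0, 4.5],
              [200.0, 1.4, 6.2, 5.2],
              [250.0, 1.7, 7.4, 6.0],
              [300.0, 2.0, 8.5, 6.9]])
q = np.array([6.0, 8.0, 10.0, 12.0])

A = np.column_stack([X, np.ones(len(X))])     # append an intercept column
coef, *_ = np.linalg.lstsq(A, q, rcond=None)  # least-squares fit of the linear equation

predicted = A @ coef  # throughput predicted from the four design parameters
```

With such a fitted equation, a model's nominal throughput (and hence its place in the proposed size series) could be read off directly from its catalogue design parameters.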

  1. An approach for upgrading biomass and pyrolysis product quality using a combination of aqueous phase bio-oil washing and torrefaction pretreatment.

    Science.gov (United States)

    Chen, Dengyu; Cen, Kehui; Jing, Xichun; Gao, Jinghui; Li, Chen; Ma, Zhongqing

    2017-06-01

    Bio-oil undergoes phase separation because of poor stability, which makes practical application of the aqueous phase challenging. In this study, a novel approach combining aqueous phase bio-oil washing and torrefaction pretreatment was used to upgrade biomass and pyrolysis product quality. The effects of the individual and combined pretreatments on cotton stalk pyrolysis were studied using TG-FTIR and a fixed bed reactor. The results showed that the aqueous phase bio-oil washing pretreatment removed metals and resolved the two pyrolysis peaks in the DTG curve. Importantly, it increased the bio-oil yield and improved the pyrolysis product quality: the water and acid content of the bio-oil decreased significantly along with an increase in phenol formation, and the heating value of the non-condensable gases improved. These effects were more pronounced when washing was combined with torrefaction pretreatment. The combined pretreatment is therefore a promising method that could contribute to the development of polygeneration pyrolysis technology. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Total System Performance Assessment-License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2002-09-13

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach are responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  3. Effects Of Combinations Of Patternmaking Methods And Dress Forms On Garment Appearance

    Directory of Open Access Journals (Sweden)

    Fujii Chinami

    2017-09-01

    Full Text Available We investigated the effects of the combinations of patternmaking methods and dress forms on the appearance of a garment. Six upper garments were made using three patternmaking methods used in France, Italy, and Japan, and two dress forms made in Japan and France. The patterns and the appearances of the garments were compared using geometrical measurements. Sensory evaluations of the differences in garment appearance and fit on each dress form were also carried out. In the patterns, the positions of bust and waist darts were different. The waist dart length, bust dart length, and positions of the bust top were different depending on the patternmaking method, even when the same dress form was used. This was a result of differences in the measurements used and the calculation methods employed for other dimensions. This was because the ideal body shape was different for each patternmaking method. Even for garments produced for the same dress form, the appearances of the shoulder, bust, and waist from the front, side, and back views were different depending on the patternmaking method. As a result of the sensory evaluation, it was also found that the bust and waist shapes of the garments were different depending on the combination of patternmaking method and dress form. Therefore, to obtain a garment with better appearance, it is necessary to understand the effects of the combinations of patternmaking methods and body shapes.

  4. A two-stage optimal planning and design method for combined cooling, heat and power microgrid system

    International Nuclear Information System (INIS)

    Guo, Li; Liu, Wenjian; Cai, Jiejin; Hong, Bowen; Wang, Chengshan

    2013-01-01

    Highlights: • A two-stage optimal method is presented for CCHP microgrid system. • Economic and environmental performance are considered as assessment indicators. • Application case demonstrates its good economic and environmental performance. - Abstract: In this paper, a two-stage optimal planning and design method for combined cooling, heat and power (CCHP) microgrid systems is presented. The optimization objective was to simultaneously minimize the total net present cost and the life-cycle carbon dioxide emissions. In the first stage, a multi-objective genetic algorithm based on the non-dominated sorting genetic algorithm-II (NSGA-II) was applied to solve the optimal design problem, including the optimization of equipment type and capacity. In the second stage, a mixed-integer linear programming (MILP) algorithm was used to solve the optimal dispatch problem. The approach was applied to a typical CCHP microgrid system in a hospital as a case study, and the effectiveness of the proposed method was verified.
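
    The two-stage structure described in this record can be sketched in a deliberately simplified form: stage one searches candidate designs for the Pareto set over net present cost and CO2 emissions (the paper uses NSGA-II; exhaustive enumeration stands in here), and stage two would then optimize dispatch of the chosen design (MILP in the paper, omitted in this sketch). All design labels and numbers are hypothetical.

```python
# Stage-one stand-in: find the Pareto-optimal designs over (cost, CO2).
designs = [  # (label, net present cost, CO2 emission) -- all values illustrative
    ("boiler_only", 90, 120),
    ("small_CHP", 100, 80),
    ("large_CHP", 140, 55),
    ("CHP_plus_chiller", 150, 50),
    ("oversized", 160, 60),
]

def pareto_front(options):
    """Keep every design not dominated on both objectives (NSGA-II stand-in)."""
    front = []
    for name, cost, co2 in options:
        dominated = any(c <= cost and e <= co2 and (c < cost or e < co2)
                        for _, c, e in options)
        if not dominated:
            front.append(name)
    return front

print(pareto_front(designs))  # ['boiler_only', 'small_CHP', 'large_CHP', 'CHP_plus_chiller']
```

    A real implementation would hand each non-dominated design to the stage-two dispatch optimizer before the final trade-off decision is made.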

  5. The Discontinuous Galerkin Finite Element Method for Solving the MEG and the Combined MEG/EEG Forward Problem

    Directory of Open Access Journals (Sweden)

    Maria Carla Piastra

    2018-02-01

    Full Text Available In Electro- (EEG and Magnetoencephalography (MEG, one important requirement of source reconstruction is the forward model. The continuous Galerkin finite element method (CG-FEM has become one of the dominant approaches for solving the forward problem over the last decades. Recently, a discontinuous Galerkin FEM (DG-FEM EEG forward approach has been proposed as an alternative to CG-FEM (Engwer et al., 2017. It was shown that DG-FEM preserves the property of conservation of charge and that it can, in certain situations such as the so-called skull leakages, be superior to the standard CG-FEM approach. In this paper, we developed, implemented, and evaluated two DG-FEM approaches for the MEG forward problem, namely a conservative and a non-conservative one. The subtraction approach was used as source model. The validation and evaluation work was done in statistical investigations in multi-layer homogeneous sphere models, where an analytic solution exists, and in a six-compartment realistically shaped head volume conductor model. In agreement with the theory, the conservative DG-FEM approach was found to be superior to the non-conservative DG-FEM implementation. This approach also showed convergence with increasing resolution of the hexahedral meshes. While in the EEG case, in presence of skull leakages, DG-FEM outperformed CG-FEM, in MEG, DG-FEM achieved similar numerical errors as the CG-FEM approach, i.e., skull leakages do not play a role for the MEG modality. In particular, for the finest mesh resolution of 1 mm sources with a distance of 1.59 mm from the brain-CSF surface, DG-FEM yielded mean topographical errors (relative difference measure, RDM% of 1.5% and mean magnitude errors (MAG% of 0.1% for the magnetic field. However, if the goal is a combined source analysis of EEG and MEG data, then it is highly desirable to employ the same forward model for both EEG and MEG data. Based on these results, we conclude that the newly presented

  6. The Discontinuous Galerkin Finite Element Method for Solving the MEG and the Combined MEG/EEG Forward Problem.

    Science.gov (United States)

    Piastra, Maria Carla; Nüßing, Andreas; Vorwerk, Johannes; Bornfleth, Harald; Oostenveld, Robert; Engwer, Christian; Wolters, Carsten H

    2018-01-01

    In Electro- (EEG) and Magnetoencephalography (MEG), one important requirement of source reconstruction is the forward model. The continuous Galerkin finite element method (CG-FEM) has become one of the dominant approaches for solving the forward problem over the last decades. Recently, a discontinuous Galerkin FEM (DG-FEM) EEG forward approach has been proposed as an alternative to CG-FEM (Engwer et al., 2017). It was shown that DG-FEM preserves the property of conservation of charge and that it can, in certain situations such as the so-called skull leakages , be superior to the standard CG-FEM approach. In this paper, we developed, implemented, and evaluated two DG-FEM approaches for the MEG forward problem, namely a conservative and a non-conservative one. The subtraction approach was used as source model. The validation and evaluation work was done in statistical investigations in multi-layer homogeneous sphere models, where an analytic solution exists, and in a six-compartment realistically shaped head volume conductor model. In agreement with the theory, the conservative DG-FEM approach was found to be superior to the non-conservative DG-FEM implementation. This approach also showed convergence with increasing resolution of the hexahedral meshes. While in the EEG case, in presence of skull leakages, DG-FEM outperformed CG-FEM, in MEG, DG-FEM achieved similar numerical errors as the CG-FEM approach, i.e., skull leakages do not play a role for the MEG modality. In particular, for the finest mesh resolution of 1 mm sources with a distance of 1.59 mm from the brain-CSF surface, DG-FEM yielded mean topographical errors (relative difference measure, RDM%) of 1.5% and mean magnitude errors (MAG%) of 0.1% for the magnetic field. However, if the goal is a combined source analysis of EEG and MEG data, then it is highly desirable to employ the same forward model for both EEG and MEG data. Based on these results, we conclude that the newly presented conservative DG

  7. Combined supra-transorbital keyhole approach for treatment of delayed intraorbital encephalocele: A minimally invasive approach for an unusual complication of decompressive craniectomy

    Science.gov (United States)

    di Somma, Lucia; Iacoangeli, Maurizio; Nasi, Davide; Balercia, Paolo; Lupi, Ettore; Girotto, Riccardo; Polonara, Gabriele; Scerrati, Massimo

    2016-01-01

    Background: Intraorbital encephalocele is a rare entity characterized by the herniation of cerebral tissue inside the orbital cavity through a defect of the orbital roof. In patients who have experienced head trauma, intraorbital encephalocele is usually secondary to orbital roof fracture. Case Description: We describe here a case of a patient who presented an intraorbital encephalocele 2 years after severe traumatic brain injury, treated by decompressive craniectomy and subsequent autologous cranioplasty, without any evidence of orbital roof fracture. The encephalocele removal and the subsequent orbital roof reconstruction were performed by using a modification of the supraorbital keyhole approach, in which we combine an orbital osteotomy with a supraorbital minicraniotomy to facilitate view and access to both the anterior cranial fossa and orbital compartment and to preserve the already osseointegrated autologous cranioplasty. Conclusions: The peculiarities of this case are the orbital encephalocele without an orbital roof traumatic fracture, and the combined minimally invasive approach used to fix both the encephalocele and the orbital roof defect. Delayed intraorbital encephalocele is probably a complication related to an unintentional opening of the orbit during decompressive craniectomy through which the brain herniated following the restoration of physiological intracranial pressure gradients after the bone flap repositioning. The reconstruction of the orbital roof was performed by using a combined supra-transorbital minimally invasive approach aiming at achieving adequate surgical exposure while preserving the autologous cranioplasty, already osteointegrated. To the best of our knowledge, this approach has not been previously used to address intraorbital encephalocele. PMID:26862452

  8. Efficient Homodifunctional Bimolecular Ring-Closure Method for Cyclic Polymers by Combining RAFT and Self-Accelerating Click Reaction.

    Science.gov (United States)

    Qu, Lin; Sun, Peng; Wu, Ying; Zhang, Ke; Liu, Zhengping

    2017-08-01

    An efficient metal-free homodifunctional bimolecular ring-closure method is developed for the formation of cyclic polymers by combining reversible addition-fragmentation chain transfer (RAFT) polymerization and self-accelerating click reaction. In this approach, α,ω-homodifunctional linear polymers with azide terminals are prepared by RAFT polymerization and postmodification of polymer chain end groups. By virtue of sym-dibenzo-1,5-cyclooctadiene-3,7-diyne (DBA) as small linkers, well-defined cyclic polymers are then prepared using the self-accelerating double strain-promoted azide-alkyne click (DSPAAC) reaction to ring-close the azide end-functionalized homodifunctional linear polymer precursors. Due to the self-accelerating property of DSPAAC ring-closing reaction, this novel method eliminates the requirement of equimolar amounts of telechelic polymers and small linkers in traditional bimolecular ring-closure methods. It facilitates this method to efficiently and conveniently produce varied pure cyclic polymers by employing an excess molar amount of DBA small linkers. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Angular approach combined to mechanical model for tool breakage detection by eddy current sensors

    OpenAIRE

    Ritou , Mathieu; Garnier , Sébastien; Furet , Benoît; Hascoët , Jean-Yves

    2014-01-01

    International audience; The paper presents a new complete approach for Tool Condition Monitoring (TCM) in milling. The aim is the early detection of small damages so that catastrophic tool failures are prevented. A versatile in-process monitoring system is introduced for reliability concerns. The tool condition is determined by estimates of the radial eccentricity of the teeth. An adequate criterion is proposed combining mechanical model of milling and angular approach. Then, a new solution i...

  10. Combined object-oriented approach for development of process control systems

    International Nuclear Information System (INIS)

    Antonova, I.; Batchkova, I.

    2013-01-01

    Full text: The traditional approaches to developing software control systems in automation and information technology, based on direct code creation, are no longer effective or successful enough. The response to these challenges is Model Driven Engineering (MDE) or its counterpart in the field of architectures, Model Driven Architecture (MDA). One of the most promising approaches supporting MDE and MDA is UML. It does not specify a methodology for software or system design but aims to provide an integrated modeling framework for structural, functional and behavioral descriptions. The success of UML in many object-oriented approaches led to the idea of applying UML to the design of multi-agent systems. The approach proposed in this paper applies a modified Harmony methodology and is based on the combined use of the UML profile for systems engineering, the IEC 61499 standard and FIPA standard protocols. The benefits of the object-oriented paradigm and the models of the IEC 61499 standard are extended with UML/SysML and FIPA notations. The development phases are illustrated with UML models of a simple process control system. The main benefits of the proposed approach can be summarized as follows: it provides consistency in syntax and underlying semantics; it increases the potential and likelihood of reuse; and it supports the whole software development life cycle in the field of process control. Including the SysML features based on extended activity and parametric diagrams, flow ports and items opens possibilities for modeling continuous systems and supports development in the field of process control. Another advantage, connected to the UML/MARTE profile, is the possibility of analyzing the designed system and of detailed design of the hardware and software platform of the modeled application. Key words: object-oriented modeling, control system, UML, SysML, IEC 61499

  11. An SPM8-based Approach for Attenuation Correction Combining Segmentation and Non-rigid Template Formation: Application to Simultaneous PET/MR Brain Imaging

    Science.gov (United States)

    Izquierdo-Garcia, David; Hansen, Adam E.; Förster, Stefan; Benoit, Didier; Schachoff, Sylvia; Fürst, Sebastian; Chen, Kevin T.; Chonde, Daniel B.; Catana, Ciprian

    2014-01-01

    We present an approach for head MR-based attenuation correction (MR-AC) based on the Statistical Parametric Mapping (SPM8) software that combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (µ-maps) from MR data in integrated PET/MR scanners. Methods Coregistered anatomical MR and CT images acquired in 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray and white matter, cerebro-spinal fluid, bone and soft tissue, and air), which were then non-rigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomical MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients (LACs) to be used for AC of PET data. The method was validated on sixteen new subjects with brain tumors (N=12) or mild cognitive impairment (N=4) who underwent CT and PET/MR scans. The µ-maps and corresponding reconstructed PET images were compared to those obtained using the gold standard CT-based approach and the Dixon-based method available on the Siemens Biograph mMR scanner. Relative change (RC) images were generated in each case and voxel- and region of interest (ROI)-based analyses were performed. Results The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain LACs (RC=1.38%±4.52%) compared to the gold standard. Similar results (RC=1.86±4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and ROI-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87±5.0% and 2.74±2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0±10.25% and 9.38±4.97%, respectively). Areas closer to skull showed the largest

  12. Combined expert system/neural networks method for process fault diagnosis

    Science.gov (United States)

    Reifman, Jaques; Wei, Thomas Y. C.

    1995-01-01

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach.
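
    The two-level scheme described in this record can be illustrated with a minimal sketch: a rule-based first level narrows a functional anomaly to candidate components, and a second level (a nearest-centroid classifier standing in for the trained neural networks) identifies the faulty component. All symptoms, components and sensor signatures here are hypothetical.

```python
RULES = {  # level 1: functional misbehavior -> plausible faulty components
    "low_flow": ["pump", "valve"],
    "high_temp": ["heat_exchanger", "pump"],
}

SIGNATURES = {  # level 2: nominal sensor signature of each component fault
    "pump": [0.2, 0.9],
    "valve": [0.8, 0.1],
    "heat_exchanger": [0.5, 0.5],
}

def diagnose(symptom, reading):
    """Narrow by function (expert-system stand-in), then classify the reading."""
    candidates = RULES[symptom]                       # level 1: restrict the search
    def dist(comp):                                   # level 2: nearest signature
        return sum((a - b) ** 2 for a, b in zip(SIGNATURES[comp], reading))
    return min(candidates, key=dist)

print(diagnose("low_flow", [0.75, 0.15]))  # valve
```

    The point of the hierarchy is that the second-level classifier only ever discriminates among the few components the first level has hypothesized, rather than among all components in the plant.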

  13. Combined expert system/neural networks method for process fault diagnosis

    Science.gov (United States)

    Reifman, J.; Wei, T.Y.C.

    1995-08-15

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach. 9 figs.

  14. Combining Statistical Methodologies in Water Quality Monitoring in a Hydrological Basin - Space and Time Approaches

    OpenAIRE

    Costa, Marco; A. Manuela Gonçalves

    2012-01-01

    In this work are discussed some statistical approaches that combine multivariate statistical techniques and time series analysis in order to describe and model spatial patterns and temporal evolution by observing hydrological series of water quality variables recorded in time and space. These approaches are illustrated with a data set collected in the River Ave hydrological basin located in the Northwest region of Portugal.

  15. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

    Full Text Available Abstract Background Traditional gene annotation methods rely on characteristics that may not be available in short reads generated from next generation technology, resulting in suboptimal performance for metagenomic (environmental samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results We not only analyze the programs' performance at different read-lengths like similar studies, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false-positives and false-negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show significant improvement in specificity at minimal cost to sensitivity, resulting in 4% improvement in accuracy for 100 bp reads with ~1% improvement in accuracy for 200 bp reads and above. To correctly annotate the start and stop of the genes, we find that a consensus of all the predictors performs best for shorter read lengths while a unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate use of the classifier combinations on a real dataset. Conclusions To optimize the performance for both prediction and annotation accuracies, we conclude that the consensus of all methods (or a majority vote is the best for reads 400 bp and shorter, while using the intersection of GeneMark and Orphelia predictions is the best for reads 500 bp and longer. We demonstrate that most methods predict over 80% of reads as coding (including partially coding reads) on a real human gut sample sequenced by Illumina technology.
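
    The combination rule this record concludes with (majority vote for reads of 400 bp or shorter, intersection of GeneMark and Orphelia for longer reads) can be sketched as follows. GeneMark and Orphelia are named in the abstract; the third predictor label and the exact interface are illustrative.

```python
def combine_predictions(read_length, votes):
    """votes: dict mapping predictor name -> True if the read is called coding."""
    if read_length <= 400:
        # short reads: consensus (majority vote) of all predictors
        coding_calls = sum(votes.values())
        return coding_calls > len(votes) / 2
    # long reads: require agreement of GeneMark and Orphelia
    return votes["GeneMark"] and votes["Orphelia"]

# A 150 bp read called coding by 2 of 3 predictors -> coding by majority vote
print(combine_predictions(150, {"GeneMark": True, "Orphelia": True, "MGA": False}))   # True
# A 600 bp read where Orphelia disagrees -> not coding under the intersection rule
print(combine_predictions(600, {"GeneMark": True, "Orphelia": False, "MGA": True}))   # False
```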

  16. Total System Performance Assessment - License Application Methods and Approach

    International Nuclear Information System (INIS)

    McNeish, J.

    2003-01-01

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach are responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  17. From ethnography to the EAST method: a tractable approach for representing distributed cognition in Air Traffic Control.

    Science.gov (United States)

    Walker, Guy H; Stanton, Neville A; Baber, Chris; Wells, Linda; Gibson, Huw; Salmon, Paul; Jenkins, Daniel

    2010-02-01

    Command and control is a generic activity involving the exercise of authority over assigned resources, combined with planning, coordinating and controlling how those resources are used. The challenge for understanding this type of activity is that it is not often amenable to the conventional experimental/methodological approach. Command and control tends to be multi-faceted (so requires more than one method), is made up of interacting socio and technical elements (so requires a systemic approach) and exhibits aggregate behaviours that emerge from these interactions (so requires methods that go beyond reductionism). In these circumstances a distributed cognition approach is highly appropriate yet the existing ethnographic methods make it difficult to apply and, for non-specialist audiences, sometimes difficult to meaningfully interpret. The Event Analysis for Systemic Teamwork method is put forward as a means of working from a distributed cognition perspective but in a way that goes beyond ethnography. A worked example from Air Traffic Control is used to illustrate how the language of social science can be translated into the language of systems analysis. Statement of Relevance: Distributed cognition provides a highly appropriate conceptual response to complex work settings such as Air Traffic Control. This paper deals with how to realise those benefits in practice without recourse to problematic ethnographic techniques.

  18. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    Science.gov (United States)

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In the second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategies, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparing the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches for survival data. We compare svm-based survival models based on ranking constraints, based on regression constraints and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints above models only based on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. Our experiments show a comparable performance for methods
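
    The concordance index used as the first performance measure in this record can be computed with a short routine that handles right censoring by counting only comparable pairs; this is a generic textbook sketch, not the paper's implementation. Ties in the risk score contribute 0.5.

```python
def concordance_index(times, events, risks):
    """times: observed times; events: 1 = event observed, 0 = censored;
    risks: model risk scores (higher risk should imply shorter survival)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i had the event before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0     # risk ordering matches survival ordering
                elif risks[i] == risks[j]:
                    concordant += 0.5     # tied risks count half
    return concordant / comparable

# Perfectly ranked toy data: shortest survivor has the highest risk score
print(concordance_index([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1]))  # 1.0
```

    A value of 0.5 corresponds to random ranking and 1.0 to perfect discrimination, which is why the index lends itself to the ranking reformulation the abstract describes.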

  19. Multi-criteria approach with linear combination technique and analytical hierarchy process in land evaluation studies

    Directory of Open Access Journals (Sweden)

    Orhan Dengiz

    2018-01-01

    Full Text Available Land evaluation analysis is a prerequisite to achieving optimum utilization of the available land resources. Lack of knowledge of the best combination of factors that suit production has contributed to low yields. The aim of this study was to determine the most suitable areas for agricultural use. To determine the land suitability classes of the study area, a multi-criteria approach was used with the linear combination technique and the analytical hierarchy process, taking into consideration land and soil physico-chemical characteristics such as slope, texture, depth, drainage, stoniness, erosion, pH, EC, CaCO3 and organic matter. These data and the land mapping units were taken from a detailed digital soil map at 1:5,000 scale. In addition, GIS software was used to produce the land suitability map of the study area. This study was carried out in the Mahmudiye, Karaamca, Yazılı, Çiçeközü, Orhaniye and Akbıyık villages in the Yenişehir district of Bursa province. The total study area is 7059 ha, of which 6890 ha is used for irrigated agriculture, dry farming and pasture, while 169 ha is used for non-agricultural purposes such as settlement, roads and water bodies. The average annual temperature and precipitation of the study area are 16.1°C and 1039.5 mm, respectively. After the land suitability classes of the study area were determined, it was found that 15.0% of the study area is highly (S1) or moderately (S2) suitable, while 85% is marginally suitable or unsuitable (S3 and N). The results of the linear combination technique were also compared with other hierarchical approaches, such as the Land Use Capability Classification and Suitability Class for Agricultural Use methods.
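
    The two pieces this record combines can be illustrated in miniature: deriving criterion weights from an AHP pairwise comparison matrix (using the common geometric-mean approximation of the principal eigenvector) and folding standardized criterion ratings into a weighted linear combination suitability score. The criteria, judgments and ratings below are hypothetical, not the study's data.

```python
import math

def ahp_weights(pairwise):
    """pairwise[i][j] = relative importance of criterion i over j (Saaty scale).
    Weights via normalized geometric means of the rows (eigenvector approximation)."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

def suitability(weights, ratings):
    """Weighted linear combination of standardized criterion ratings in [0, 1]."""
    return sum(w * r for w, r in zip(weights, ratings))

# Three criteria, e.g. slope, depth, drainage (judgments are illustrative):
w = ahp_weights([[1,   3,   5],
                 [1/3, 1,   2],
                 [1/5, 1/2, 1]])
score = suitability(w, [0.8, 0.6, 0.4])  # ratings for one land mapping unit
print([round(x, 3) for x in w], round(score, 3))
```

    In a GIS workflow the `suitability` step is evaluated per mapping unit (or per raster cell) and the resulting scores are binned into the S1/S2/S3/N classes.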

  20. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    Science.gov (United States)

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV-type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.
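
    The "explicit step plus several implicit steps on a decomposition of the functional" idea in this record can be sketched in a heavily simplified Euclidean 1D setting (scalars instead of diffusion tensors, no Riemannian metric, identity forward operator): one explicit gradient step on the data-fidelity term, followed by implicit proximal steps applied to each pairwise TV difference term in turn. This is an illustration of the splitting pattern only, not the paper's algorithm.

```python
def tv_denoise(y, lam=0.5, tau=0.5, iters=200):
    """Cyclic forward-backward splitting for 0.5*||x - y||^2 + lam * TV(x) in 1D."""
    x = list(y)
    for _ in range(iters):
        # explicit (forward) gradient step on the data term 0.5*||x - y||^2
        x = [xi - tau * (xi - yi) for xi, yi in zip(x, y)]
        # implicit (backward) steps: exact prox of tau*lam*|x[i] - x[i+1]|,
        # which moves each pair toward each other without letting them cross
        for i in range(len(x) - 1):
            d = x[i] - x[i + 1]
            shift = min(tau * lam, abs(d) / 2) * (1 if d > 0 else -1)
            x[i] -= shift
            x[i + 1] += shift
    return x

noisy = [1.0, 1.2, 0.9, 5.1, 4.9, 5.0]   # piecewise-constant signal plus noise
print([round(v, 2) for v in tv_denoise(noisy)])
```

    The TV prox flattens the small within-plateau fluctuations while the data term keeps the large jump between the two plateaus, which is the edge-preserving behavior that motivates TV regularization here.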

  1. Mixed methods research.

    Science.gov (United States)

    Halcomb, Elizabeth; Hickman, Louise

    2015-04-08

    Mixed methods research involves the use of qualitative and quantitative data in a single research project. It represents an alternative methodological approach, combining qualitative and quantitative research approaches, which enables nurse researchers to explore complex phenomena in detail. This article provides a practical overview of mixed methods research and its application in nursing, to guide the novice researcher considering a mixed methods research project.

  2. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings.

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-08-01

    Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration. Published by the BMJ Publishing Group.
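
The elementary computation behind a CART analysis like the one above is choosing the split that minimizes the weighted Gini impurity of the resulting child nodes. The sketch below shows that single step on a hypothetical risk variable and outcome labels; it is an illustration of the CART splitting rule, not the study's audit-data model.

```python
# Minimal sketch of one CART step: pick the threshold on a single
# predictor that minimizes the weighted Gini impurity of the two child
# nodes. The "risk score" and event data below are hypothetical.

def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(xs, ys):
    best = (None, float("inf"))
    for thr in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (thr, score)
    return best

risk = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
event = [0, 0, 0, 0, 1, 1, 1, 1]
thr, impurity = best_split(risk, event)
```

Recursing this rule on each child node grows the full tree, whose leaves define the patient risk groups used to target interventions.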

  3. Object Oriented Modeling: A method for combining model and software development

    NARCIS (Netherlands)

    Van Lelyveld, W.

    2010-01-01

    When requirements for a new model cannot be met by available modeling software, new software can be developed for a specific model. Methods for the development of both model and software exist, but a method for combined development has not been found. A compatible way of thinking is required to

  4. A combined modification of Newton's method for systems of nonlinear equations

    Energy Technology Data Exchange (ETDEWEB)

    Monteiro, M.T.; Fernandes, E.M.G.P. [Universidade do Minho, Braga (Portugal)]

    1996-12-31

    To improve the performance of Newton's method for the solution of systems of nonlinear equations a modification to the Newton iteration is implemented. The modified step is taken as a linear combination of the Newton step and steepest descent directions. In the paper we describe how the coefficients of the combination can be generated to make effective use of the two component steps. Numerical results that show the usefulness of the combined modification are presented.
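
A combined Newton/steepest-descent step of this kind can be sketched for a small system. Note the hedge: the paper generates the combination coefficients adaptively, whereas the sketch below uses a fixed weight theta purely for illustration, and the test system (a circle intersected with a line) is hypothetical.

```python
# Sketch of a combined Newton/steepest-descent iteration for F(x, y) = 0,
# mixing the Newton direction with the steepest-descent direction of
# 0.5*||F||^2 via a fixed weight theta.

def F(x, y):
    # example system: circle of radius 2 intersected with the line x = y
    return (x * x + y * y - 4.0, x - y)

def J(x, y):
    # Jacobian of F
    return ((2.0 * x, 2.0 * y), (1.0, -1.0))

def combined_step(x, y, theta=0.9):
    f1, f2 = F(x, y)
    (a, b), (c, d) = J(x, y)
    det = a * d - b * c
    # Newton direction: solve J d = -F by Cramer's rule
    dn = ((-f1 * d + b * f2) / det, (-a * f2 + c * f1) / det)
    # steepest-descent direction for 0.5*||F||^2 is -J^T F
    ds = (-(a * f1 + c * f2), -(b * f1 + d * f2))
    # rescale the descent direction to the Newton step length
    nn = (dn[0] ** 2 + dn[1] ** 2) ** 0.5
    ns = (ds[0] ** 2 + ds[1] ** 2) ** 0.5 or 1.0
    ds = (ds[0] * nn / ns, ds[1] * nn / ns)
    return (x + theta * dn[0] + (1.0 - theta) * ds[0],
            y + theta * dn[1] + (1.0 - theta) * ds[1])

x, y = 2.0, 1.0                 # starting guess; root is (sqrt(2), sqrt(2))
for _ in range(100):
    x, y = combined_step(x, y)
```

The descent component adds robustness far from the root, at the cost of reducing the pure-Newton quadratic rate to a fast linear one near it.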

  5. A combined dynamic analysis method for geometrically nonlinear vibration isolators with elastic rings

    Science.gov (United States)

    Hu, Zhan; Zheng, Gangtie

    2016-08-01

    A combined analysis method is developed in the present paper for studying the dynamic properties of a type of geometrically nonlinear vibration isolator, which is composed of push-pull configuration rings. This method combines the geometrically nonlinear theory of curved beams and the Harmonic Balance Method to overcome the difficulty in calculating the vibration and vibration transmissibility under large deformations of the ring structure. Using the proposed method, nonlinear dynamic behaviors of this isolator, such as the lock situation due to Coulomb damping and the usual jump resulting from the nonlinear stiffness, can be investigated. Numerical solutions based on the primary harmonic balance are first verified by direct integration results. Then, the whole procedure of this combined analysis method is demonstrated and validated by slow sinusoidal sweep experiments with different amplitudes of the base excitation. Both numerical and experimental results indicate that this type of isolator behaves as a hardening spring with increasing amplitude of the base excitation, which makes it suitable for isolating both steady-state vibrations and transient shocks.
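
The Harmonic Balance Method used above can be illustrated on the textbook Duffing oscillator rather than the paper's curved-beam ring model. Assuming a single-harmonic response x(t) = A*cos(w*t - phi) and balancing the first harmonic yields an algebraic amplitude-frequency equation, which the sketch below solves by bisection; all parameter values are hypothetical.

```python
# Single-term harmonic balance for a Duffing oscillator
#   x'' + 2*zeta*x' + x + beta*x**3 = f*cos(w*t).
# Balancing the first harmonic of the assumed response gives
#   ((1 - w**2)*A + 0.75*beta*A**3)**2 + (2*zeta*w*A)**2 = f**2,
# solved here for the steady-state amplitude A at a fixed frequency w.

def hb_residual(A, w, zeta=0.05, beta=0.5, f=0.2):
    return ((1.0 - w * w) * A + 0.75 * beta * A ** 3) ** 2 \
        + (2.0 * zeta * w * A) ** 2 - f * f

def amplitude(w, lo=0.0, hi=5.0):
    # residual is -f**2 < 0 at A = 0; bisect to the first sign change
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if hb_residual(mid, w) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

A = amplitude(0.5)   # sub-resonant excitation, below the jump region
```

Sweeping w and tracking all real roots of the cubic-in-A**2 relation reproduces the hardening backbone and the jump phenomenon mentioned in the abstract.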

  6. Combinational approach using solid dispersion and semi-solid matrix technology to enhance in vitro dissolution of telmisartan

    Directory of Open Access Journals (Sweden)

    Syed Faisal Ali

    2016-02-01

    Full Text Available The present investigation focused on formulating semi-solid capsules (SSCs) of the hydrophobic drug telmisartan (TLMS) by encapsulating a semi-solid matrix of its solid dispersion (SD) in HPMC capsules. The combinational approach was used to reduce the lag time in drug release and improve its dissolution. SDs of TLMS were prepared using the hot fusion method by varying the combinations of Pluronic-F68, Gelucire 50/13 and Plasdone S630. A total of nine batches (SD1-SD9) were characterized for micromeritic properties, in vitro dissolution behavior and surface characteristics. SD4, with 52.43% cumulative drug release (CDR) in phosphate buffer, pH 7.4, in 120 min, t50% of 44.2 min and DE30min of 96.76%, was selected for the development of semi-solid capsules. Differential scanning calorimetry of SD4 revealed molecular dispersion of TLMS in Pluronic-F68. SD4 was formulated into SSCs using Gelucire 44/14 and PEG 400 as semi-solid components and PEG 6000 as a suspending agent to reduce the lag time for effective drug dissolution. SSC6 showed maximum in vitro drug dissolution of 97.49% in phosphate buffer, pH 7.4, within 20 min, almost a threefold reduction in the time required to achieve similar dissolution by the SD. Thus, SSCs present an excellent approach to enhancing in vitro dissolution and reducing the dissolution lag time of poorly water-soluble drugs, especially those therapeutic classes intended for faster onset of action. The developed approach based on HPMC capsules provides a better alternative for targeted delivery of telmisartan to the vegetarian population.

  7. An integrated lean-methods approach to hospital facilities redesign.

    Science.gov (United States)

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  8. Efficient Discovery of Novel Multicomponent Mixtures for Hydrogen Storage: A Combined Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wolverton, Christopher [Northwestern Univ., Evanston, IL (United States). Dept. of Materials Science and Engineering]; Ozolins, Vidvuds [Univ. of California, Los Angeles, CA (United States). Dept. of Materials Science and Engineering]; Kung, Harold H. [Northwestern Univ., Evanston, IL (United States). Dept. of Chemical and Biological Engineering]; Yang, Jun [Ford Scientific Research Lab., Dearborn, MI (United States)]; Hwang, Sonjong [California Inst. of Technology (CalTech), Pasadena, CA (United States). Dept. of Chemistry and Chemical Engineering]; Shore, Sheldon [The Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry and Biochemistry]

    2016-11-28

    The objective of the proposed program is to discover novel mixed hydrides for hydrogen storage, which enable the DOE 2010 system-level goals. Our goal is to find a material that desorbs 8.5 wt.% H2 or more at temperatures below 85°C. The research program will combine first-principles calculations of reaction thermodynamics and kinetics with material and catalyst synthesis, testing, and characterization. We will combine materials from distinct categories (e.g., chemical and complex hydrides) to form novel multicomponent reactions. Systems to be studied include mixtures of complex hydrides and chemical hydrides [e.g. LiNH2+NH3BH3] and nitrogen-hydrogen based borohydrides [e.g. Al(BH4)3(NH3)3]. The 2010 and 2015 FreedomCAR/DOE targets for hydrogen storage systems are very challenging, and cannot be met with existing materials. The vast majority of the work to date has delineated materials into various classes, e.g., complex and metal hydrides, chemical hydrides, and sorbents. However, very recent studies indicate that mixtures of storage materials, particularly mixtures between various classes, hold promise to achieve technological attributes that materials within an individual class cannot reach. Our project involves a systematic, rational approach to designing novel multicomponent mixtures of materials with fast hydrogenation/dehydrogenation kinetics and favorable thermodynamics using a combination of state-of-the-art scientific computing and experimentation. We will use the accurate predictive power of first-principles modeling to understand the thermodynamic and microscopic kinetic processes involved in hydrogen release and uptake and to design new material/catalyst systems with improved properties. Detailed characterization and atomic-scale catalysis experiments will elucidate the effect of dopants and nanoscale catalysts in achieving fast kinetics and reversibility. And

  9. Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method

    Directory of Open Access Journals (Sweden)

    Xuejun Chen

    2014-01-01

    Full Text Available As one of the most promising renewable resources for electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind fluctuates with strong variation, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for regulatory actions and short-term load dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H) weighted average smoothing method, the ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations, including twelve samples, and our simulation indicates that the proposed methods perform much better than the traditional ones when addressing short-term wind speed forecasting problems.
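
The 53H pre-smoothing stage named above can be sketched directly. This is a plausible reading of the smoother (a running median of five, then a running median of three, then a Hanning weighted average with weights 0.25, 0.5, 0.25); the endpoint handling (copying ends through unchanged) is a simplification, and the wind-speed samples are hypothetical.

```python
# Sketch of the five-three-Hanning (53H) weighted average smoother used
# as the first stage before EEMD decomposition and NAR forecasting.

def running_median(data, width):
    half = width // 2
    out = list(data)
    for i in range(half, len(data) - half):
        out[i] = sorted(data[i - half:i + half + 1])[half]
    return out

def hanning(data):
    out = list(data)
    for i in range(1, len(data) - 1):
        out[i] = 0.25 * data[i - 1] + 0.5 * data[i] + 0.25 * data[i + 1]
    return out

def smooth_53h(data):
    return hanning(running_median(running_median(data, 5), 3))

# hypothetical ten-minute wind speeds with one abnormal spike at index 5
speeds = [5.0, 5.2, 5.1, 5.3, 5.2, 12.0, 5.4, 5.3, 5.5, 5.4, 5.6]
clean = smooth_53h(speeds)
```

The median passes remove the isolated abnormal value and the Hanning pass smooths the remainder, which is the "abnormal detection" role this stage plays before decomposition.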

  10. Total System Performance Assessment - License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2003-12-08

    "Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach" provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the "Yucca Mountain Review Plan" (YMRP), "Final Report" (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  11. Combined method for reducing emission of sulfur dioxide and nitrogen oxides from thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kotler, V.R.; Grachev, S.P.

    1991-11-01

    Discusses the method developed by the Fossil Energy Research Corp. in the USA for combined desulfurization and denitrification of flue gases from coal-fired power plants. The method combines two methods tested on a commercial scale: the dry additive method for suppression of sulfur dioxide and the selective noncatalytic reduction of nitrogen oxides using urea (the NOXOUT process). The following aspects of joint flue gas desulfurization and denitrification are analyzed: flowsheets of the system, chemical reactions and reaction products, laboratory tests of the method and its efficiency, temperature effects on desulfurization and denitrification of flue gases, effects of reagent consumption rates, operating cost, efficiency of the combined method compared to other conventional methods of separate flue gas desulfurization and denitrification, economic aspects of flue gas denitrification and desulfurization. 4 refs.

  12. Determination of carbohydrates in tobacco by pressurized liquid extraction combined with a novel ultrasound-assisted dispersive liquid-liquid microextraction method.

    Science.gov (United States)

    Cai, Kai; Hu, Deyu; Lei, Bo; Zhao, Huina; Pan, Wenjie; Song, Baoan

    2015-07-02

    A novel method combining derivatization with ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) has been developed for the simultaneous determination of 11 main carbohydrates in tobacco. The combined method involves pressurized liquid extraction (PLE), derivatization, and UA-DLLME, followed by analysis of the main carbohydrates with a gas chromatography-flame ionization detector (GC-FID). First, the PLE conditions were optimized using a univariate approach. Then, the derivatization methods were compared and optimized; the aldononitrile acetate method combined with the O-methoxyoxime-trimethylsilyl method was used for derivatization. Finally, the critical variables affecting the UA-DLLME extraction efficiency were screened using a fractional factorial design (FFD) and further optimized using a Doehlert design (DD) within response surface methodology. The optimum conditions were found to be 44 μL of CHCl3, 2.3 mL of H2O, 11% w/v NaCl, a 5 min extraction time and a 5 min centrifugation time. Under the optimized experimental conditions, the limits of detection (LODs) and linear correlation coefficients were found to be in the ranges of 0.06-0.90 μg mL(-1) and 0.9987-0.9999, respectively. The proposed method was successfully employed to analyze three flue-cured tobacco cultivars, among which the main carbohydrate concentrations were found to differ considerably. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Combined reservoir simulation and seismic technology, a new approach for modeling CHOPS

    Energy Technology Data Exchange (ETDEWEB)

    Aghabarati, H.; Lines, L.; Settari, A. [Calgary Univ., AB (Canada)]; Dumitrescu, C. [Sensor Geophysical Ltd., Calgary, AB (Canada)]

    2008-10-15

    One of the primary recovery schemes for developing heavy oil reservoirs in Canada is cold heavy oil production with sand (CHOPS). With the introduction of progressive cavity pumps, CHOPS can be applied in unconsolidated or weakly consolidated formations. In order to better understand reservoir properties and recovery mechanism, this paper discussed the use of a combined reservoir simulation and seismic technology that were applied for a heavy oil reservoir situated in Saskatchewan, Canada. Using a seismic survey acquired in 1989, the study used geostatistical methods to estimate the initial reservoir porosity. Sand production was then modeled using an erosional velocity approach and the model was run based on oil production. The paper also compared the results of true porosity derived from simulation against the porosity estimated from a second seismic survey acquired in 2001. Last, the extent and the shape of the enhanced permeability region was modelled in order to estimate porosity distribution. It was concluded that the performance of the CHOPS wells depended greatly on the rate of creation of the high permeability zone around the wells. 9 refs., 2 tabs., 18 figs., 1 appendix.

  14. A semi-supervised learning approach to predict synthetic genetic interactions by combining functional and topological properties of functional gene network

    Directory of Open Access Journals (Sweden)

    Han Kyungsook

    2010-06-01

    Full Text Available Abstract Background Genetic interaction profiles are highly informative and helpful for understanding the functional linkages between genes, and therefore have been extensively exploited for annotating gene functions and dissecting specific pathway structures. However, our understanding is rather limited to the relationship between double concurrent perturbation and various higher-level phenotypic changes, e.g. those in cells, tissues or organs. Modifier screens, such as synthetic genetic arrays (SGA), can help us to understand the phenotype caused by combined gene mutations. Unfortunately, exhaustive tests on all possible combined mutations in any genome are vulnerable to combinatorial explosion and are infeasible either technically or financially. Therefore, an accurate computational approach to predict genetic interaction is highly desirable, and such methods have the potential of alleviating the bottleneck on experiment design. Results In this work, we introduce a computational systems biology approach for the accurate prediction of pairwise synthetic genetic interactions (SGI). First, a high-coverage and high-precision functional gene network (FGN) is constructed by integrating protein-protein interaction (PPI), protein complex and gene expression data; then, a graph-based semi-supervised learning (SSL) classifier is utilized to identify SGI, where the topological properties of protein pairs in the weighted FGN are used as input features of the classifier. We compare the proposed SSL method with the state-of-the-art supervised classifier, the support vector machine (SVM), on a benchmark dataset in S. cerevisiae to validate our method's ability to distinguish synthetic genetic interactions from non-interacting gene pairs. Experimental results show that the proposed method can accurately predict genetic interactions in S. cerevisiae (with a sensitivity of 92% and specificity of 91%).
Noticeably, the SSL method is more efficient than SVM, especially for
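
The graph-based SSL idea above can be shown at toy scale with label propagation: labeled nodes are clamped and unlabeled nodes repeatedly take the weighted average of their neighbors' scores. The network, weights and labels below are hypothetical, and this generic scheme stands in for (but is not identical to) the paper's classifier over FGN topological features.

```python
# Minimal graph-based semi-supervised learning by label propagation on a
# toy weighted "functional gene network". Nodes 0 and 4 are labeled
# (+1 interacting, -1 non-interacting); the rest are inferred.

edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 0.2, (3, 4): 1.0}
n = 5
labels = {0: 1.0, 4: -1.0}

w = [[0.0] * n for _ in range(n)]
for (i, j), v in edges.items():         # symmetric adjacency matrix
    w[i][j] = w[j][i] = v

f = [labels.get(i, 0.0) for i in range(n)]
for _ in range(200):
    nxt = []
    for i in range(n):
        if i in labels:                 # clamp labeled nodes
            nxt.append(labels[i])
        else:                           # weighted average of neighbors
            s = sum(w[i][j] for j in range(n))
            nxt.append(sum(w[i][j] * f[j] for j in range(n)) / s)
    f = nxt
```

The weak 0.2 edge acts as a bottleneck, so the propagated scores split the chain into a positive side and a negative side around it.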

  15. A combined analytic-numeric approach for some boundary-value problems

    Directory of Open Access Journals (Sweden)

    Mustafa Turkyilmazoglu

    2016-02-01

    Full Text Available A combined analytic-numeric approach is undertaken in the present work for the solution of boundary-value problems in the finite or semi-infinite domains. Equations to be treated arise specifically from the boundary layer analysis of some two and three-dimensional flows in fluid mechanics. The purpose is to find quick but accurate enough solutions. Taylor expansions at either boundary conditions are computed which are next matched to the other asymptotic or exact boundary conditions. The technique is applied to the well-known Blasius as well as Karman flows. Solutions obtained in terms of series compare favorably with the existing ones in the literature.
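
For context, the Blasius problem mentioned above also has a standard purely numerical route: RK4 integration with shooting on the unknown wall curvature f''(0). The sketch below is that conventional baseline, against which a matched Taylor-series solution can be checked; it is explicitly not the paper's analytic-numeric matching technique.

```python
# Conventional shooting solution of the Blasius boundary-layer problem
#   f''' + 0.5*f*f'' = 0,  f(0) = f'(0) = 0,  f'(inf) = 1.
# RK4 integrates the system for a trial f''(0); bisection adjusts the
# trial value until the far-field condition f'(inf) = 1 is met.

def fprime_far(curvature, eta_max=8.0, h=0.02):
    """Integrate the Blasius system and return f'(eta_max)."""
    def rhs(s):
        f, fp, fpp = s
        return (fp, fpp, -0.5 * f * fpp)
    s = (0.0, 0.0, curvature)
    for _ in range(int(eta_max / h)):
        k1 = rhs(s)
        k2 = rhs(tuple(s[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = rhs(tuple(s[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = rhs(tuple(s[i] + h * k3[i] for i in range(3)))
        s = tuple(s[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
                  for i in range(3))
    return s[1]

lo, hi = 0.1, 1.0               # f'(inf) grows monotonically with f''(0)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if fprime_far(mid) < 1.0:
        lo = mid
    else:
        hi = mid
wall_curvature = 0.5 * (lo + hi)    # classical value is about 0.33206
```

A series solution matched to the asymptotic condition, as in the paper, should reproduce this wall curvature.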

  16. Meaning and challenges in the practice of multiple therapeutic massage modalities: a combined methods study.

    Science.gov (United States)

    Porcino, Antony J; Boon, Heather S; Page, Stacey A; Verhoef, Marja J

    2011-09-20

    provision is likely unique to each practitioner. These results may be of interest to researchers considering similar practice issues in other professions. The use of a combined-methods design effectively captured this complexity of TMB practice. TMB research needs to consider research approaches that can capture or adapt to the individualized nature of practice.

  17. Partial maxillectomy for ameloblastoma of the maxilla with infratemporal fossa involvement: A combined endoscopic endonasal and transoral approach.

    Science.gov (United States)

    Guha, A; Hart, L; Polachova, H; Chovanec, M; Schalek, P

    2018-02-21

    Ameloblastoma represents the most common epithelial odontogenic tumor. Because of the proximity of maxillary tumors to the orbit and skull base, they should be managed as radically as possible. Maxillectomy, mainly via the transfacial or transoral approach, represents the most common type of surgical procedure. A drawback of these approaches is limited control of the superomedial extent of the tumour in the paranasal area. We report the use of a combined endoscopic endonasal and transoral approach to manage a maxillary plexiform ameloblastoma in a 48-year-old male patient. The combined endoscopic endonasal and transoral approach enabled radical removal of the tumour with a 1.5 cm margin of radiographically intact bone and good control from both the intrasinusal and intraoral aspects. Adequate visualization of the extent of the lesion (e.g. orbit, infratemporal fossa, anterior cranial base) was achieved, and healing was uncomplicated. This technique of partial maxillectomy led to very good aesthetic and functional results, with no recurrence noted during review appointments. The combination of the endoscopic endonasal and transoral approaches for partial maxillectomy allows sufficient reduction of the defect, eliminating the necessity for reconstruction and reducing the morbidity associated with it. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  18. Bayesian methods for the combination of core sampling data with historical models for tank characterization

    International Nuclear Information System (INIS)

    York, J.C.; Remund, K.M.; Chen, G.; Simpson, B.C.; Brown, T.M.

    1995-07-01

    A wide variety of information is available on the contents of the nuclear waste tanks at the Hanford site. This report describes an attempt to combine several sources of information using a Bayesian statistical approach, a methodology that allows the combination of multiple disparate information sources. After each source of information is summarized in terms of a probability distribution function (pdf), Bayes' theorem is applied to combine them. This approach has been applied to characterizing tanks B-110, B-111, and B-201. These tanks were chosen for their simple waste matrices: B-110 and B-111 contain mostly 2C waste, and B-201 contains mostly 224 waste. Additionally, the results of this analysis are used to make predictions for tank T-111 (which contains both 2C and 224 waste). These predictions are compared to the estimates based on core samples from tank T-111.
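
The combination step described above can be sketched for the simplest case: each information source summarized as a Gaussian pdf, combined by a conjugate normal update with known variances (precision weighting). The example numbers are hypothetical, not actual Hanford tank data, and the report's real pdfs need not be Gaussian.

```python
# Sketch of combining disparate information sources about one tank
# quantity via Bayes' theorem, assuming each source is summarized as a
# Gaussian (mean, variance) pair with known variance.

def combine_gaussians(sources):
    """sources: iterable of (mean, variance) pairs.
    Returns the posterior (mean, variance) under precision weighting."""
    precision = sum(1.0 / v for _, v in sources)
    mean = sum(m / v for m, v in sources) / precision
    return mean, 1.0 / precision

historical = (12.0, 4.0)     # e.g. a historical process-model estimate
core_sample = (10.0, 1.0)    # e.g. a core-sample measurement (more precise)
post_mean, post_var = combine_gaussians([historical, core_sample])
```

The combined estimate lies between the sources, pulled toward the more precise one, and its variance is smaller than either source's alone, which is the value of combining disparate data.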

  19. Combining density functional and incremental post-Hartree-Fock approaches for van der Waals dominated adsorbate-surface interactions: Ag2/graphene

    International Nuclear Information System (INIS)

    Lara-Castells, María Pilar de; Mitrushchenkov, Alexander O.; Stoll, Hermann

    2015-01-01

    A combined density functional (DFT) and incremental post-Hartree-Fock (post-HF) approach, proven earlier to calculate He-surface potential energy surfaces [de Lara-Castells et al., J. Chem. Phys. 141, 151102 (2014)], is applied to describe the van der Waals dominated Ag2/graphene interaction. It extends the dispersionless density functional theory developed by Pernal et al. [Phys. Rev. Lett. 103, 263201 (2009)] by including periodic boundary conditions while the dispersion is parametrized via the method of increments [H. Stoll, J. Chem. Phys. 97, 8449 (1992)]. Starting with the elementary cluster unit of the target surface (benzene), continuing through the realistic cluster model (coronene), and ending with the periodic model of the extended system, modern ab initio methodologies for intermolecular interactions as well as state-of-the-art van der Waals-corrected density functional-based approaches are put together both to assess the accuracy of the composite scheme and to better characterize the Ag2/graphene interaction. The present work illustrates how the combination of DFT and post-HF perspectives may be efficient to design simple and reliable ab initio-based schemes in extended systems for surface science applications

  20. An approach for investigation of secure access processes at a combined e-learning environment

    Science.gov (United States)

    Romansky, Radi; Noninska, Irina

    2017-12-01

    The article discusses an approach to investigating processes for regulating security and privacy control in a heterogeneous e-learning environment realized as a combination of traditional and cloud means and tools. The authors' proposal for a combined architecture of the e-learning system is presented, and the main subsystems and procedures are discussed. A formalization of the processes for using different types of resources (public, private internal and private external) is proposed. The apparatus of Markov chains (MC) is used for modeling and analytical investigation of secure access to the resources, and some assessments are presented.
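
The Markov chain modeling step can be sketched as follows: states represent the resource type touched by a learning session, and the stationary distribution gives the long-run share of accesses per resource type. The transition probabilities below are hypothetical, not taken from the article.

```python
# Sketch of a Markov chain over access states (public, private internal,
# private external). Power iteration pi <- pi * P converges to the
# stationary distribution of this ergodic chain.

P = [
    [0.6, 0.3, 0.1],   # from public
    [0.4, 0.5, 0.1],   # from private internal
    [0.5, 0.2, 0.3],   # from private external
]

pi = [1.0, 0.0, 0.0]   # start in the "public" state
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
```

Analytical quantities of this kind (stationary shares, expected transition counts) are what such a model can contribute to assessing where access-control effort should be concentrated.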

  1. A multiparameter chaos control method based on OGY approach

    International Nuclear Information System (INIS)

    Souza de Paula, Aline; Amorim Savi, Marcelo

    2009-01-01

    Chaos control is based on the richness of responses of chaotic behavior and may be understood as the use of tiny perturbations for the stabilization of an unstable periodic orbit (UPO) embedded in a chaotic attractor. Since one of these UPOs can provide better performance than others in a particular situation, the use of chaos control can make this kind of behavior desirable in a variety of applications. The OGY method is a discrete technique that considers small perturbations promoted in the neighborhood of the desired orbit when the trajectory crosses a specific surface, such as a Poincaré section. This contribution proposes a multiparameter semi-continuous method based on the OGY approach in order to control chaotic behavior. Two different approaches are possible with this method: a coupled approach, where all control parameters influence the system dynamics even when they are not active; and an uncoupled approach, a particular case where control parameters return to the reference value when they become passive. As an application of the general formulation, a two-parameter actuation of a nonlinear pendulum is investigated employing the coupled and uncoupled approaches. Analyses are carried out considering signals generated by numerical integration of the mathematical model using experimentally identified parameters. Results show that the procedure can be a good alternative for chaos control, since it provides more effective UPO stabilization than the classical single-parameter approach.
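
The core OGY idea (tiny parameter perturbations applied only near the target orbit) can be shown in its simplest single-parameter form on the logistic map; the paper's contribution extends this to multiple parameters and semi-continuous actuation of a pendulum, which this sketch does not reproduce.

```python
# Single-parameter OGY control of the logistic map x -> r*x*(1-x).
# The unstable fixed point x* = 1 - 1/r is stabilized by small parameter
# perturbations dr applied only inside a small window around x*.

R0 = 3.9                        # nominal parameter value (chaotic regime)
XSTAR = 1.0 - 1.0 / R0          # unstable fixed point of the map
LAM = 2.0 - R0                  # df/dx at the fixed point
G = XSTAR * (1.0 - XSTAR)       # df/dr at the fixed point

def ogy_step(x, window=0.005):
    """One iterate of the logistic map with OGY control active near x*."""
    dr = 0.0
    if abs(x - XSTAR) < window:
        # linearization: x' - x* = LAM*(x - x*) + G*dr; pick dr so x' = x*
        dr = -LAM * (x - XSTAR) / G
    r = R0 + dr
    return r * x * (1.0 - x)

x = XSTAR + 0.004               # start inside the control window
for _ in range(50):
    x = ogy_step(x)
```

With the control disabled (window set to 0) the same orbit remains chaotic; with it enabled, an orbit that wanders into the window is captured and held on the fixed point by vanishingly small perturbations.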

  2. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
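
A minimal example of the kind of permutation test the monograph treats: the absolute difference of group means is recomputed under random relabelings of the pooled data, with no normality or homogeneity-of-variance assumptions. The data below are hypothetical.

```python
# Two-sample permutation test on the difference of means, depending only
# on the data at hand rather than a theoretical null distribution.
import random

def permutation_test(a, b, resamples=2000, seed=42):
    rng = random.Random(seed)
    pooled = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    hits = 1                                  # count the observed split itself
    for _ in range(resamples):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / (resamples + 1.0)

p = permutation_test([1.0, 2.0, 3.0, 4.0, 5.0],
                     [10.0, 11.0, 12.0, 13.0, 14.0])
```

For small samples the full set of C(10, 5) relabelings could be enumerated exactly instead of sampled; the Monte Carlo version above is what modern computing power makes routine.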

  3. Modeling of the inhomogeneity of grain refinement during combined metal forming process by finite element and cellular automata methods

    Energy Technology Data Exchange (ETDEWEB)

    Majta, Janusz; Madej, Łukasz; Svyetlichnyy, Dmytro S.; Perzyński, Konrad; Kwiecień, Marcin, E-mail: mkwiecie@agh.edu.pl; Muszka, Krzysztof

    2016-08-01

    The potential of the discrete cellular automata technique to predict grain refinement in wires produced using a combined metal forming process is presented and discussed within the paper. The developed combined metal forming process can be treated as one of the Severe Plastic Deformation (SPD) techniques and consists of three different modes of deformation: asymmetric drawing with bending, namely accumulated angular drawing (AAD), wire drawing (WD) and wire flattening (WF). To accurately replicate the complex stress state at both macro and micro scales during subsequent deformations, a two-stage modeling approach was used. First, the Finite Element Method (FEM), implemented in the commercial ABAQUS software, was applied to simulate the entire combined forming process at the macro scale. Then, based on the FEM results, the Cellular Automata (CA) method was applied to simulate grain refinement at the microstructure level. Data transferred between the FEM and CA methods included a set of files with strain tensor components obtained from selected integration points in the macro scale model. As a result of the CA simulation, detailed information on microstructure evolution under severe plastic deformation conditions was obtained, namely: changes in the shape and size of the modeled representative volume with the imposed microstructure, changes in the number of grains, subgrains and dislocation cells, development of the grain boundary angle distribution, and changes in the pole figures. To evaluate the CA model's predictive capabilities, the simulation results were compared with scanning electron microscopy and electron backscatter diffraction (SEM/EBSD) studies of samples after the AAD+WD+WF process.

  4. Combinations of Methods for Collaborative Evaluation of the Usability of Interactive Software Systems

    Directory of Open Access Journals (Sweden)

    Andrés Solano

    2016-01-01

    Full Text Available Usability is a fundamental quality characteristic for the success of an interactive system. It is a concept that includes a set of metrics and methods aimed at obtaining easy-to-learn and easy-to-use systems. Usability Evaluation Methods, UEM, are quite diverse; their application depends on variables such as cost, time availability, and human resources. A large number of UEM can be employed to assess interactive software systems, but questions arise when deciding which method and/or combination of methods gives more (relevant) information. We propose Collaborative Usability Evaluation Methods, CUEM, following the principles defined by Collaboration Engineering. This paper analyzes a set of CUEM conducted on different interactive software systems. It proposes combinations of CUEM that provide more complete and comprehensive information about the usability of interactive software systems than those evaluation methods conducted independently.

  5. Test of the combined method for extracting spectroscopic factors in N =50 nuclei

    Science.gov (United States)

    Walter, David; Cizewski, J. A.; Baugher, T.; Ratkiewicz, A.; Pain, S. D.; Nunes, F. M.; Ahn, S.; Cerizza, G.; Jones, K. L.; Manning, B.; Thornsberry, C.

    2017-09-01

    The single-particle properties of nuclei near shell closures and r-process waiting points can be observed using single-nucleon transfer reactions with beams of rare isotopes. However, approximations have to be made about the final bound state to extract spectroscopic information. An approach to constrain the bound-state potential has been proposed by Mukhamedzhanov and Nunes. At peripheral reaction energies (~5 MeV/u), the ANC for the nucleus can be extracted and combined with the same reaction at higher energies (~40 MeV/u). These combined measurements can constrain the shape of the bound-state potential, so that the spectroscopic factor can be reliably extracted. To test this method, the 86Kr(d,p) reaction was performed in inverse kinematics with a 35 MeV/u beam at the National Superconducting Cyclotron Laboratory (NSCL), with the ORRUBA and SIDAR arrays of silicon strip detectors coupled to the S800 spectrometer. These successful results supported a follow-up measurement with a radioactive ion beam of 84Se at 45 MeV/u at the NSCL at the end of 2017. Results from the 86Kr(d,p) measurement will be presented, as well as preparations for the upcoming 84Se(d,p) measurement. This work is supported in part by the National Science Foundation and U.S. D.O.E.

  6. Estimating Probable Maximum Precipitation by Considering Combined Effect of Typhoon and Southwesterly Air Flow

    Directory of Open Access Journals (Sweden)

    Cheng-Chin Liu

    2016-01-01

    Full Text Available Typhoon Morakot hit southern Taiwan in 2009, bringing 48 hr of heavy rainfall [close to the Probable Maximum Precipitation (PMP)] to the Tsengwen Reservoir catchment. This extreme rainfall event resulted from the combined (co-movement) effect of two climate systems, i.e., a typhoon and the southwesterly air flow. Based on the traditional PMP estimation method (i.e., the storm transposition method, STM), two PMP estimation approaches that consider the combined effect, the Amplification Index (AI) and Independent System (IS) approaches, are proposed in this work. The AI approach assumes that the southwesterly air flow precipitation in a typhoon event could reach its maximum value. The IS approach assumes that the typhoon and the southwesterly air flow are independent weather systems. Based on these assumptions, calculation procedures for the two approaches were constructed for a case study of the Tsengwen Reservoir catchment. The results show that the PMP estimates for 6- to 60-hr durations using the two approaches are approximately 30% larger than the PMP estimates using the traditional STM without considering the combined effect. This work is a pioneering PMP estimation method that considers the combined effect of a typhoon and southwesterly air flow. Further studies on this issue are essential and encouraged.
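
    The difference between the traditional STM estimate and the Independent System (IS) approach can be illustrated with a toy calculation. The rainfall depths below are made-up placeholders, not the study's Tsengwen data:

```python
# Hypothetical maximized 48-hr depths (mm) for illustration only.
typhoon_only_max = 1800.0     # maximized typhoon component (STM-style)
southwesterly_max = 600.0     # maximized southwesterly air-flow component

def pmp_traditional(typhoon_max):
    """Traditional storm-transposition estimate: typhoon system only."""
    return typhoon_max

def pmp_independent_system(typhoon_max, sw_max):
    """IS approach: the two weather systems are treated as independent,
    so their maximized depths add."""
    return typhoon_max + sw_max

combined = pmp_independent_system(typhoon_only_max, southwesterly_max)
print(combined / pmp_traditional(typhoon_only_max) - 1)  # relative increase
```

    With these placeholder depths the combined-effect estimate exceeds the single-system one by a third; the paper reports roughly 30% increases for its actual catchment data.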

  7. The Majority Wins: a Method for Combining Speaker Diarization Systems

    NARCIS (Netherlands)

    Huijbregts, M.; Leeuwen, D.A. van; Jong, F.M.G. de

    2009-01-01

    In this paper we present a method for combining multiple diarization systems into one single system by applying a majority voting scheme. The voting scheme selects the best segmentation purely on basis of the output of each system. On our development set of NIST Rich Transcription evaluation
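
    A minimal sketch of frame-level majority voting, assuming the speaker labels of the individual systems have already been mapped onto a common label set (in practice that alignment between diarization outputs is itself a nontrivial step):

```python
from collections import Counter

def majority_vote(frame_labels):
    """Combine per-frame speaker labels from several diarization systems
    by majority vote; ties are broken by the earliest-listed system."""
    combined = []
    for frames in zip(*frame_labels):
        counts = Counter(frames)
        top = counts.most_common(1)[0][1]
        winners = [lab for lab in frames if counts[lab] == top]
        combined.append(winners[0])   # first occurrence wins on ties
    return combined

# Three hypothetical systems labelling five frames each.
sys_a = ["s1", "s1", "s2", "s2", "s2"]
sys_b = ["s1", "s2", "s2", "s2", "s1"]
sys_c = ["s1", "s1", "s1", "s2", "s2"]
print(majority_vote([sys_a, sys_b, sys_c]))
```

    The paper's scheme operates on segmentations rather than fixed frames, but the voting principle is the same: the label most systems agree on is kept.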

  8. A simple network agreement-based approach for combining evidences in a heterogeneous sensor network

    Directory of Open Access Journals (Sweden)

    Raúl Eusebio-Grande

    2015-12-01

    Full Text Available In this research we investigate how the evidences provided by both static and mobile nodes that are part of a heterogenous sensor network can be combined to have trustworthy results. A solution relying on a network agreement-based approach was implemented and tested.

  9. Over-extending reduction combined with unilateral approach percutaneous vertebroplasty for the treatment of vertebral compression fractures due to osteoporosis

    International Nuclear Information System (INIS)

    Wei Xinjian; Ji Xianghui; Cao Fei; Zhang Fuhua

    2012-01-01

    Objective: To assess the clinical effect of over-extending reduction combined with percutaneous vertebroplasty (PVP) in treating vertebral compression fractures caused by osteoporosis. Methods: A total of 16 patients with vertebral compression fractures due to osteoporosis were treated with over-extending reduction by using traction on the operating table, after which PVP through a trans-single-pedicular approach was performed on the fractured vertebrae. The visual analogue scale (VAS) was used to evaluate clinical effectiveness. The preoperative and postoperative heights of the fractured vertebral bodies were determined, and the vertebral height recovery ratio was calculated. Results: Technical success was achieved in 20 vertebrae of the 16 cases. Bone cement leakage was observed in front of the vertebral body (n=5), at the side of the vertebral body (n=20) and within the intervertebral space (n=2). After treatment, the VAS score decreased from 8.5±1.2 preoperatively to 2.5±1.4 postoperatively. The vertebral height recovery ratio was (40.1±23.5)%. After surgery, the VAS score and the vertebral height were significantly improved (P<0.05). Conclusion: Over-extending reduction combined with PVP through a trans-single-pedicular approach is an effective treatment for vertebral compression fractures caused by osteoporosis. (authors)

  10. A physics based method for combining multiple anatomy models with application to medical simulation.

    Science.gov (United States)

    Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David

    2009-01-01

    We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.

  11. Approaches and methods for econometric analysis of market power

    DEFF Research Database (Denmark)

    Perekhozhuk, Oleksandr; Glauben, Thomas; Grings, Michael

    2017-01-01

    This study discusses two widely used approaches in the New Empirical Industrial Organization (NEIO) literature and examines the strengths and weaknesses of the Production-Theoretic Approach (PTA) and the General Identification Method (GIM) for the econometric analysis of market power in agricultural and food markets. We provide a framework that may help researchers to evaluate and improve structural models of market power. Starting with the specification of the approaches in question, we compare published empirical studies of market power with respect to the choice of the applied approach, functional forms, estimation methods and derived estimates of the degree of market power. Thereafter, we use our framework to evaluate several structural models based on PTA and GIM to measure oligopsony power in the Ukrainian dairy industry. The PTA-based results suggest that the estimated parameters…

  12. Super-Monte Carlo: a combined approach to x-ray beam planning

    International Nuclear Information System (INIS)

    Keall, P.; Hoban, P.

    1996-01-01

    A new, accurate 3-D radiotherapy dose calculation algorithm, Super-Monte Carlo (SMC), has been developed which combines elements of both superposition/convolution and Monte Carlo methods. Currently used clinical dose calculation algorithms (except those based on the superposition method) can have errors of over 10%, especially where significant density inhomogeneities exist, such as in the head and neck and lung regions. Errors of this magnitude can cause significant departures in the tumour control probability of the actual treatment. (author)

  13. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data

    International Nuclear Information System (INIS)

    Behar, Joachim; Oster, Julien; Clifford, Gari D

    2014-01-01

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1–5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. (paper)
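
    The preprocessing stage described above (band-pass filtering plus an optional mains notch) can be sketched as follows. The sampling rate matches the 1 kHz Challenge recordings, but the filter orders and cut-off frequencies are illustrative choices rather than the paper's exact settings:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 1000.0  # Hz; the Challenge abdominal recordings are sampled at 1 kHz

def preprocess(abdominal, mains_hz=50.0):
    """Band-pass to remove baseline wander and high frequencies, then
    notch out mains interference. Zero-phase filtering via filtfilt."""
    b, a = butter(4, [3.0 / (fs / 2), 100.0 / (fs / 2)], btype="band")
    x = filtfilt(b, a, abdominal)
    bn, an = iirnotch(mains_hz / (fs / 2), Q=30.0)
    return filtfilt(bn, an, x)

# Synthetic check: a 10 Hz "cardiac-band" tone plus 50 Hz mains and a DC
# offset; the band-pass removes the offset, the notch removes the mains.
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 10 * t)
noisy = signal + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.3
clean = preprocess(noisy)
```

    The residual after this stage would then go to maternal-ECG cancellation (template subtraction, ICA, Kalman filtering) and Pan-Tompkins foetal QRS detection.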

  14. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Science.gov (United States)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and a blind source separation method is proposed. Because a diesel engine has many complex noise sources, during the noise and vibration test a lead covering method was applied to the engine to isolate interference noise from cylinders No. 1-5; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were used to measure the radiated noise signals 1 m away from the diesel engine. First, the binaural sound localization method was adopted to separate the noise sources located in different places. Then, for noise sources located in the same place, the blind source separation method was used to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to further verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine; their frequencies are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise sources.

  15. Linear combination of forecasts with numerical adjustment via MINIMAX non-linear programming

    Directory of Open Access Journals (Sweden)

    Jairo Marlon Corrêa

    2016-03-01

    Full Text Available This paper proposes a linear combination of forecasts obtained from three forecasting methods (namely ARIMA, exponential smoothing and artificial neural networks), whose adaptive weights are determined via a multi-objective non-linear programming problem that seeks to simultaneously minimize the MAE, MAPE and MSE statistics. The results achieved by the proposed combination are compared with the traditional approach of linear combinations of forecasts, where the optimal adaptive weights are determined by minimizing only the MSE; with the combination method by arithmetic mean; and with the individual methods.
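
    A rough sketch of the MINIMAX weight selection, using hypothetical forecasts and scipy's SLSQP solver as a stand-in for the paper's non-linear programming formulation. The non-smooth max-of-losses objective is reformulated with an auxiliary bound variable, a standard minimax trick:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observations and three individual forecasts standing in
# for ARIMA, exponential smoothing and an ANN; not the paper's data.
y = np.array([10.0, 12.0, 11.0, 13.0, 12.5])
F = np.array([
    [10.5, 12.4, 10.6, 13.5, 12.0],
    [ 9.6, 11.5, 11.4, 12.6, 13.0],
    [10.2, 12.1, 11.2, 12.9, 12.4],
])

def losses(w):
    """MAE, MAPE and MSE of the weighted combination w @ F."""
    err = y - w @ F
    return np.array([np.mean(np.abs(err)),
                     np.mean(np.abs(err / y)),
                     np.mean(err ** 2)])

# MINIMAX reformulation: minimize an auxiliary bound t subject to every
# statistic staying below t and the weights lying on the simplex.
x0 = np.array([1 / 3, 1 / 3, 1 / 3, losses(np.ones(3) / 3).max()])
res = minimize(
    lambda x: x[3], x0,
    constraints=[
        {"type": "eq", "fun": lambda x: x[:3].sum() - 1.0},
        {"type": "ineq", "fun": lambda x: x[3] - losses(x[:3])},
    ],
    bounds=[(0, 1)] * 3 + [(0, None)],
    method="SLSQP",
)
weights = res.x[:3]
print(weights, losses(weights).max())
```

    The traditional MSE-only combination would instead minimize `losses(w)[2]` alone; the minimax version guards against any one statistic becoming disproportionately bad.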

  16. A study of combined evaluation of suppliers based on correlation

    Directory of Open Access Journals (Sweden)

    Heting Qiu

    2013-03-01

    Full Text Available Purpose: The selection of logistics service providers is an important issue in supply chain management, but different evaluation methods may lead to different results, which can produce inconsistent conclusions. This paper adopts a new perspective, combining a variety of methods to eliminate the deviations of the individual evaluation methods. Design/methodology/approach: This paper expounds the application of a combined evaluation method based on correlation. The entropy method, factor analysis, grey colligation evaluation and AHP were used for the research. Findings: According to the evaluation results, the supplier rankings obtained by the individual methods differ considerably. The results show that the combined evaluation method can eliminate the deviations of the individual evaluation methods. Originality/value: The combined evaluation method makes up for the defects of single evaluation methods and obtains a result that is more stable and credible, with smaller deviation. This study can provide enterprise leaders with a more scientific method for selecting cooperative companies.

  17. A Weighted Combination Method for Conflicting Evidence in Multi-Sensor Data Fusion

    Directory of Open Access Journals (Sweden)

    Fuyuan Xiao

    2018-05-01

    Full Text Available Dempster–Shafer evidence theory is widely applied in various fields related to information fusion. However, how to avoid the counter-intuitive results is an open issue when combining highly conflicting pieces of evidence. In order to handle such a problem, a weighted combination method for conflicting pieces of evidence in multi-sensor data fusion is proposed by considering both the interplay between the pieces of evidence and the impacts of the pieces of evidence themselves. First, the degree of credibility of the evidence is determined on the basis of the modified cosine similarity measure of basic probability assignment. Then, the degree of credibility of the evidence is adjusted by leveraging the belief entropy function to measure the information volume of the evidence. Finally, the final weight of each piece of evidence generated from the above steps is obtained and adopted to modify the bodies of evidence before using Dempster’s combination rule. A numerical example is provided to illustrate that the proposed method is reasonable and efficient in handling the conflicting pieces of evidence. In addition, applications in data classification and motor rotor fault diagnosis validate the practicability of the proposed method with better accuracy.
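
    The final fusion step is Dempster's combination rule, sketched here for two bodies of evidence. The credibility weighting that precedes it in the paper (modified cosine similarity of basic probability assignments plus belief entropy) is omitted; focal elements and masses below are invented for illustration:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments whose focal
    elements are frozensets of hypotheses. Returns the normalized
    combined assignment and the conflict mass K."""
    combined = {}
    conflict = 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb          # mass assigned to the empty set
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}, conflict

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.9, B: 0.1}
m2 = {A: 0.8, B: 0.2}
fused, k = dempster_combine(m1, m2)
print(fused[A], k)
```

    The counter-intuitive behaviour the paper targets appears when the conflict K approaches 1; weighting the bodies of evidence by credibility before applying this rule is precisely what keeps the normalization well behaved.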

  18. A practical approach for deriving all-weather soil moisture content using combined satellite and meteorological data

    Science.gov (United States)

    Leng, Pei; Li, Zhao-Liang; Duan, Si-Bo; Gao, Mao-Fang; Huo, Hong-Yuan

    2017-09-01

    Soil moisture has long been recognized as one of the essential variables in the water cycle and energy budget between Earth's surface and atmosphere. The present study develops a practical approach for deriving all-weather soil moisture using combined satellite images and gridded meteorological products. In this approach, soil moisture over Moderate Resolution Imaging Spectroradiometer (MODIS) clear-sky pixels is estimated from the Vegetation Index/Temperature (VIT) trapezoid scheme, in which theoretical dry and wet edges are determined pixel by pixel from China Meteorological Administration Land Data Assimilation System (CLDAS) meteorological products, including air temperature, solar radiation, wind speed and specific humidity. For cloudy pixels, soil moisture values are derived by calculating surface and aerodynamic resistances from wind speed. The approach is thus capable of filling the soil moisture gaps that traditional optical/thermal infrared methods leave over cloudy pixels, allowing for a spatially complete soil moisture map over large areas. Evaluation over agricultural fields indicates that the proposed approach produces a generally reasonable distribution of all-weather soil moisture. Acceptable accuracy between the estimated all-weather soil moisture and in-situ measurements at different depths was found, with a Root Mean Square Error (RMSE) varying from 0.067 m³/m³ to 0.079 m³/m³ and a slight bias ranging from 0.004 m³/m³ to -0.011 m³/m³. The proposed approach reveals significant potential for deriving all-weather soil moisture from currently available satellite images and meteorological products at regional or global scales in future developments.

  19. An SPM8-based approach for attenuation correction combining segmentation and nonrigid template formation: application to simultaneous PET/MR brain imaging.

    Science.gov (United States)

    Izquierdo-Garcia, David; Hansen, Adam E; Förster, Stefan; Benoit, Didier; Schachoff, Sylvia; Fürst, Sebastian; Chen, Kevin T; Chonde, Daniel B; Catana, Ciprian

    2014-11-01

    We present an approach for head MR-based attenuation correction (AC) based on the Statistical Parametric Mapping 8 (SPM8) software, which combines segmentation- and atlas-based features to provide a robust technique to generate attenuation maps (μ maps) from MR data in integrated PET/MR scanners. Coregistered anatomic MR and CT images of 15 glioblastoma subjects were used to generate the templates. The MR images from these subjects were first segmented into 6 tissue classes (gray matter, white matter, cerebrospinal fluid, bone, soft tissue, and air), which were then nonrigidly coregistered using a diffeomorphic approach. A similar procedure was used to coregister the anatomic MR data for a new subject to the template. Finally, the CT-like images obtained by applying the inverse transformations were converted to linear attenuation coefficients to be used for AC of PET data. The method was validated on 16 new subjects with brain tumors (n = 12) or mild cognitive impairment (n = 4) who underwent CT and PET/MR scans. The μ maps and corresponding reconstructed PET images were compared with those obtained using the gold standard CT-based approach and the Dixon-based method available on the Biograph mMR scanner. Relative change (RC) images were generated in each case, and voxel- and region-of-interest-based analyses were performed. The leave-one-out cross-validation analysis of the data from the 15 atlas-generation subjects showed small errors in brain linear attenuation coefficients (RC, 1.38% ± 4.52%) compared with the gold standard. Similar results (RC, 1.86% ± 4.06%) were obtained from the analysis of the atlas-validation datasets. The voxel- and region-of-interest-based analysis of the corresponding reconstructed PET images revealed quantification errors of 3.87% ± 5.0% and 2.74% ± 2.28%, respectively. The Dixon-based method performed substantially worse (the mean RC values were 13.0% ± 10.25% and 9.38% ± 4.97%, respectively). Areas closer to the skull showed

  20. Combined culture-based and culture-independent approaches provide insights into diversity of jakobids, an extremely plesiomorphic eukaryotic lineage

    Directory of Open Access Journals (Sweden)

    Tomáš Pánek

    2015-11-01

    Full Text Available We used culture-based and culture-independent approaches to explore the diversity and ecology of anaerobic jakobids (Excavata: Jakobida), an overlooked, deep-branching lineage of free-living nanoflagellates related to Euglenozoa. Jakobids are among the few lineages of nanoflagellates frequently detected in anoxic habitats by PCR-based studies; however, only two strains of a single jakobid species have been isolated from those habitats. We recovered 712 environmental sequences and cultured 21 new isolates of anaerobic jakobids that collectively represent at least ten different species, of which four are uncultured. Two of the cultured species have never been detected by environmental, PCR-based methods. Surprisingly, the culture-based and culture-independent approaches each revealed a relatively high proportion of the overall species diversity of anaerobic jakobids: 60% and 80%, respectively. Our phylogenetic analyses based on SSU rDNA and six protein-coding genes showed that anaerobic jakobids constitute a clade of morphologically similar, but genetically and ecologically diverse protists, Stygiellidae fam. nov. Our investigation combines culture-based and environmental molecular-based approaches to capture a wider extent of species diversity and shows Stygiellidae as a group that ordinarily inhabits anoxic, sulfide- and ammonium-rich marine habitats worldwide.

  1. Combining flow cytometry and 16S rRNA gene pyrosequencing: A promising approach for drinking water monitoring and characterization

    KAUST Repository

    Prest, Emmanuelle I E C

    2014-10-01

    The combination of flow cytometry (FCM) and 16S rRNA gene pyrosequencing data was investigated for the purpose of monitoring and characterizing microbial changes in drinking water distribution systems. High-frequency sampling (5 min intervals for 1 h) was performed at the outlet of a treatment plant and at one location in the full-scale distribution network. In total, 52 bulk water samples were analysed with FCM, pyrosequencing and conventional methods (adenosine triphosphate, ATP; heterotrophic plate count, HPC). FCM and pyrosequencing results individually showed that changes in the microbial community occurred in the water distribution system, which was not detected with conventional monitoring. FCM data showed an increase in the total bacterial cell concentrations (from 345±15×10³ to 425±35×10³ cells mL⁻¹) and in the percentage of intact bacterial cells (from 39±3.5% to 53±4.4%) during water distribution. This shift was also observed in the FCM fluorescence fingerprints, which are characteristic of each water sample. A similar shift was detected in the microbial community composition as characterized with pyrosequencing, showing that FCM and genetic fingerprints are congruent. FCM and pyrosequencing data were subsequently combined to calculate cell concentration changes for each bacterial phylum. The results revealed an increase in cell concentrations of specific bacterial phyla (e.g., Proteobacteria), along with a decrease in other phyla (e.g., Actinobacteria), which could not be concluded from the two methods individually. The combination of FCM and pyrosequencing methods is a promising approach for future drinking water quality monitoring and for advanced studies on drinking water distribution pipeline ecology. © 2014 Elsevier Ltd.
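
    The core combination step is simple: relative abundances from sequencing are scaled by the absolute FCM cell count to give per-phylum cell concentrations. The numbers below are illustrative, not the study's measurements:

```python
# Total intact cell concentration from FCM (cells/mL) and relative
# abundances per phylum from 16S pyrosequencing (illustrative values).
fcm_total = 425e3

rel_abundance = {
    "Proteobacteria": 0.55,
    "Actinobacteria": 0.20,
    "Bacteroidetes": 0.25,
}

# Scale sequencing fractions by the absolute FCM count: this converts
# compositional (relative) data into absolute per-phylum concentrations.
phylum_cells = {p: f * fcm_total for p, f in rel_abundance.items()}
print(phylum_cells["Proteobacteria"])
```

    This conversion is what lets the study report that some phyla increased in absolute concentration while others decreased, a conclusion neither compositional sequencing data nor bulk FCM counts could support alone.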

  2. Improving the sludge disintegration efficiency of sonication by combining with alkalization and thermal pre-treatment methods.

    Science.gov (United States)

    Şahinkaya, S; Sevimli, M F; Aygün, A

    2012-01-01

    One of the most serious problems encountered in biological wastewater treatment processes is the production of waste activated sludge (WAS). Sonication, an energy-intensive process, is the most powerful sludge pre-treatment method. Because little information is available on combined pre-treatment methods involving sonication, this study investigated such combinations, aiming to improve the disintegration efficiency of sonication by coupling it with alkalization and thermal pre-treatment. Process performances were evaluated based on the increases in soluble chemical oxygen demand (COD), protein and carbohydrate. The releases of soluble COD, carbohydrate and protein by the combined methods were higher than those by sonication, alkalization or thermal pre-treatment alone. The degrees of sludge disintegration achieved by the various options were, in descending order: sono-alkalization > sono-thermal pre-treatment > sonication. It was therefore determined that combining sonication with alkalization significantly improves sludge disintegration and decreases the energy required to reach the same yield as sonication alone. In addition, the effects of these methods on sludge settleability and dewaterability were investigated, and their pre-treatment performances were described by a kinetic mathematical model. The proposed model was shown to accurately predict the efficiencies of the ultrasonic pre-treatment methods.

  3. Multiobjective scatter search approach with new combination scheme applied to solve environmental/economic dispatch problem

    International Nuclear Information System (INIS)

    Athayde Costa e Silva, Marsil de; Klein, Carlos Eduardo; Mariani, Viviana Cocco; Santos Coelho, Leandro dos

    2013-01-01

    The environmental/economic dispatch (EED) is an important daily optimization task in the operation of many power systems. It involves the simultaneous optimization of fuel cost and emission objectives, which are conflicting. The EED problem can be formulated as a large-scale, highly constrained, nonlinear multiobjective optimization problem. In recent years, many metaheuristic optimization approaches have been reported in the literature to solve the multiobjective EED. Among metaheuristics, scatter search approaches have recently been receiving increasing attention because of their potential to effectively explore a wide range of complex optimization problems. This paper proposes an improved scatter search (ISS) to deal with multiobjective EED problems based on concepts of Pareto dominance and crowding distance, together with a new scheme for the combination method. In this paper, we have considered the standard IEEE (Institute of Electrical and Electronics Engineers) 30-bus system with 6 generators, and the results obtained by the proposed ISS algorithm are compared with other recently reported results in the literature. Simulation results demonstrate that the proposed ISS algorithm is a capable candidate for solving multiobjective EED problems. - Highlights: ► Economic dispatch. ► We solve the environmental/economic power dispatch problem with scatter search. ► Multiobjective scatter search can effectively improve the global search ability.

  4. QSPR Models for Predicting Log Pliver Values for Volatile Organic Compounds Combining Statistical Methods and Domain Knowledge

    Directory of Open Access Journals (Sweden)

    Mónica F. Díaz

    2012-12-01

    Full Text Available Volatile organic compounds (VOCs) are contained in a variety of chemicals that can be found in household products and may have undesirable effects on health. It is therefore important to model blood-to-liver partition coefficients (log Pliver) for VOCs in a fast and inexpensive way. In this paper, we present two new quantitative structure-property relationship (QSPR) models for the prediction of log Pliver, for which we also propose a hybrid approach to descriptor selection. This hybrid methodology combines a machine learning method with a manual selection based on expert knowledge, which allows obtaining a set of descriptors that is interpretable in physicochemical terms. Our regression models were trained using decision trees and neural networks and validated using an external test set. The results show high prediction accuracy compared to previous log Pliver models, and the descriptor selection approach provides a means to obtain a small set of descriptors that agrees with the theoretical understanding of the target property.

  5. Is Combining Child Labour and School Education the Right Approach? Investigating the Cambodian Case

    Science.gov (United States)

    Kim, Chae-Young

    2009-01-01

    The paper considers whether letting children combine work and school is a valid and effective approach in Cambodia. Policy makers' suggestions that child labour should be allowed to some extent due to household poverty appear ungrounded as no significant relation between children's work and household poverty is found while arranging school…

  6. Combination of synoptical-analogous and dynamical methods to increase skill score of monthly air temperature forecasts over Northern Eurasia

    Science.gov (United States)

    Khan, Valentina; Tscepelev, Valery; Vilfand, Roman; Kulikova, Irina; Kruglova, Ekaterina; Tischenko, Vladimir

    2016-04-01

    Long-range forecasts at monthly-to-seasonal time scales are in great demand in socio-economic sectors for managing climate-related risks and opportunities. At the same time, the quality of long-range forecasts does not fully meet the needs of user applications. Different approaches, including the combination of different prognostic models, are used in forecast centers to increase prediction skill for specific regions and globally. In the present study, two forecasting methods exploited in the operational practice of the Hydrometeorological Center of Russia are considered. One of them is a synoptical-analogue method for forecasting surface air temperature at the monthly scale. The other is a dynamical system based on the global semi-Lagrangian model SL-AV, developed in collaboration between the Institute of Numerical Mathematics and the Hydrometeorological Centre of Russia. The seasonal version of this model has been used to issue global and regional forecasts at monthly-to-seasonal time scales. This study presents the results of evaluating surface air temperature forecasts generated with the above-mentioned synoptical-statistical and dynamical models, as well as their combination, to potentially increase skill scores over Northern Eurasia. The test sample of operational forecasts encompasses the period from 2010 through 2015. The seasonal and interannual variability of the skill scores of these methods is discussed. It was noticed that the quality of all forecasts depends strongly on the inertia of macro-circulation processes: the skill scores decrease during significant alterations of synoptic fields for both the dynamical and empirical schemes. The procedure of combining forecasts from different methods has, in some cases, demonstrated its effectiveness. Support for this study was provided by Russian Science Foundation Grant №14-37-00053.

  7. Solution of 3D inverse scattering problems by combined inverse equivalent current and finite element methods

    International Nuclear Information System (INIS)

    Kılıç, Emre; Eibert, Thomas F.

    2015-01-01

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and Poynting theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data, and promising reconstruction results are obtained.
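
    The iterative solver for the cavity problem is a Gauss–Newton loop. A generic sketch on a toy exponential-fitting problem is shown below, with a plain least-squares solve standing in for the Krylov-subspace regularization used in the paper:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Plain Gauss–Newton: at each step solve J dx = -r in the
    least-squares sense and update the parameter vector."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x

# Toy nonlinear inverse problem: recover (a, b) from y = a * exp(b * t).
t = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)

res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(res, jac, [1.0, 0.0]))
```

    In the actual cavity problem the unknowns are material properties, the residual comes from the finite element forward model, and the inner linear solve is regularized; the iteration skeleton, however, is exactly this.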

  8. Solution of 3D inverse scattering problems by combined inverse equivalent current and finite element methods

    Energy Technology Data Exchange (ETDEWEB)

    Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.

    2015-05-01

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and the Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.

  9. DC Voltage Droop Control Implementation in the AC/DC Power Flow Algorithm: Combinational Approach

    DEFF Research Database (Denmark)

    Akhter, F.; Macpherson, D.E.; Harrison, G.P.

    2015-01-01

    of operational flexibility, as more than one VSC station controls the DC link voltage of the MTDC system. This model enables the study of the effects of DC droop control on the power flows of the combined AC/DC system for steady state studies after VSC station outages or transient conditions without needing...... to use its complete dynamic model. Further, the proposed approach can be extended to include multiple AC and DC grids for combined AC/DC power flow analysis. The algorithm is implemented by modifying the MATPOWER based MATACDC program and the results show that the algorithm works efficiently....

  10. Multimodal biometric method that combines veins, prints, and shape of a finger

    Science.gov (United States)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo

    2011-01-01

    Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
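The score-level fusion described above can be sketched with the conventional Z-score normalization baseline and a weighted SUM rule. All scores and weights below are invented for illustration, and the paper's fuzzy normalization and SVM stages are not reproduced here:

```python
import numpy as np

def z_normalize(scores):
    """Conventional Z-score normalization of matching scores."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

# Hypothetical matching scores of four probe attempts for three modalities
vein_s  = np.array([0.91, 0.40, 0.85, 0.30])
print_s = np.array([0.80, 0.35, 0.90, 0.25])
geom_s  = np.array([0.60, 0.50, 0.70, 0.45])

weights = np.array([0.5, 0.3, 0.2])  # assumed modality weights
fused = (weights[0] * z_normalize(vein_s)
         + weights[1] * z_normalize(print_s)
         + weights[2] * z_normalize(geom_s))  # weighted SUM rule
```

Normalizing each modality to zero mean and unit variance before summing keeps one modality's score scale from dominating the fusion.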

  11. Antiviral Combination Approach as a Perspective to Combat Enterovirus Infections.

    Science.gov (United States)

    Galabov, Angel S; Nikolova, Ivanka; Vassileva-Pencheva, Ralitsa; Stoyanova, Adelina

    2015-01-01

    Human enteroviruses, distributed worldwide, are causative agents of a broad spectrum of diseases with extremely high morbidity, including a series of severe illnesses of the central nervous system, heart, endocrine pancreas, skeletal muscles, etc., as well as the common cold, which contributes to the development of chronic respiratory diseases, including chronic obstructive pulmonary disease. These diseases, along with the significantly high morbidity and mortality in children and in high-risk populations (immunodeficiencies, neonates), make chemotherapy the main tool for the control of enterovirus infections. At present, clinically effective antivirals for the treatment of enteroviral infections do not exist, despite the large amount of work carried out in this field. The main reason for this is the development of drug resistance. We studied the development of resistance to the strongest inhibitors of enteroviruses, the WIN compounds (blockers of the VP1 protein hydrophobic pocket), especially in in vivo models of Coxsackievirus B (CV-B) infection in mice. We introduced the tracing of a panel of phenotypic markers (MIC50 value, plaque shape and size, stability at 50℃, pathogenicity in mice) for characterization of the drug mutants (resistant and dependent) as a very important stage in the study of enterovirus inhibitors. Moreover, as a result of VP1 RNA sequence analysis performed on the model of disoxaril mutants of CVB1, we determined the molecular basis of the drug resistance. Monotherapy courses were the only approach used until now. For the first time in the search for anti-enterovirus antivirals, our team introduced testing of the combined effect of selective inhibitors of enterovirus replication with different modes of action. This study resulted in the selection of a number of double combinations, highly effective in vitro, with synergistic effects and a broad spectrum of sensitive

  12. The mechanical properties of stored red blood cells measured by a convenient microfluidic approach combining with mathematic model.

    Science.gov (United States)

    Wang, Ying; You, Guoxing; Chen, Peipei; Li, Jianjun; Chen, Gan; Wang, Bo; Li, Penglong; Han, Dong; Zhou, Hong; Zhao, Lian

    2016-03-01

    The mechanical properties of red blood cells (RBCs) are critical to the rheological and hemodynamic behavior of blood. Although measurement of the mechanical properties of RBCs has been studied for many years, the existing methods, such as ektacytometry, micropipette aspiration, and microfluidic approaches, still have limitations. Mechanical changes to RBCs during storage play an important role in transfusion, and so need to be evaluated pre-transfusion, which demands a convenient and rapid detection method. We present a microfluidic approach that focuses on the mechanical properties of a single cell under physiological shear flow and does not require any high-end equipment, such as a high-speed camera. Using this method, images of stretched RBCs under physical shear can be obtained. The subsequent analysis, combined with mathematical models, gives the deformability distribution, the morphology distribution, the normalized curvature, and the Young's modulus (E) of the stored RBCs. The deformability index and the morphology distribution show that the deformability of RBCs decreases significantly with storage time. The normalized curvature, defined as the curvature of the cell tail during stretching in flow, suggests that the surface charge of the stored RBCs decreases significantly. According to the mathematical model, which derives from the relation between shear stress and the extension ratio of adherent cells, the Young's moduli of the stored RBCs were also calculated and show a significant increase with storage. Therefore, the present method is capable of representing the mechanical properties and can distinguish the mechanical changes of RBCs during storage. The advantages of this method are the small sample volume needed, high throughput, and ease of use, which make it promising for the quality monitoring of RBCs.

  13. Combined action of ionizing radiation with another factor: common rules and theoretical approach

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Roh, Changhyun; Komarova, Ludmila N.; Petin, Vladislav G.

    2013-01-01

    Two or more factors can act simultaneously and produce combined effects on biological objects. This study focuses on a theoretical approach to the synergistic interaction arising from the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents was suggested for quantitative prediction of irreversibly damaged cells after combined exposures. The model takes into account the synergistic interaction of agents and is based on the supposition that the additional effective damages responsible for the synergy are irreversible and originate from an interaction of ineffective sublesions. Experimental results are presented on the irreversible component of radiation damage in diploid yeast cells simultaneously exposed to heat together with ionizing radiation or UV light. A good agreement of the experimental results with the model predictions was demonstrated. The importance of the results obtained for the interpretation of the mechanism of synergistic interaction of various environmental factors is discussed. (author)

  14. Combined action of ionizing radiation with another factor: common rules and theoretical approach

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Kyu; Roh, Changhyun, E-mail: jkkim@kaeri.re.kr [Korea Atomic Energy Research Institute, Jeongeup (Korea, Republic of); Komarova, Ludmila N.; Petin, Vladislav G., E-mail: vgpetin@yahoo.com [Medical Radiological Research Center, Obninsk (Russian Federation)

    2013-07-01

    Two or more factors can act simultaneously and produce combined effects on biological objects. This study focuses on a theoretical approach to the synergistic interaction arising from the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents was suggested for quantitative prediction of irreversibly damaged cells after combined exposures. The model takes into account the synergistic interaction of agents and is based on the supposition that the additional effective damages responsible for the synergy are irreversible and originate from an interaction of ineffective sublesions. Experimental results are presented on the irreversible component of radiation damage in diploid yeast cells simultaneously exposed to heat together with ionizing radiation or UV light. A good agreement of the experimental results with the model predictions was demonstrated. The importance of the results obtained for the interpretation of the mechanism of synergistic interaction of various environmental factors is discussed. (author)

  15. Co-design of RAD and ETHICS methodologies: a combination of information system development methods

    Science.gov (United States)

    Nasehi, Arezo; Shahriyari, Salman

    2011-12-01

    Co-design is a new trend in the social world which tries to capture different ideas in order to use the most appropriate features for a system. In this paper, the co-design of two information system methodologies is considered: rapid application development (RAD) and effective technical and human implementation of computer-based systems (ETHICS). We examine the characteristics of these methodologies to assess the possibility of co-designing or combining them for developing an information system. To this end, four different aspects are analyzed: social or technical approach, user participation and user involvement, job satisfaction, and overcoming resistance to change. Finally, a case study using quantitative methods is analyzed in order to examine the possibility of co-design based on these factors. The paper concludes that RAD and ETHICS are appropriate for co-design and offers some suggestions for it.

  16. Combination of uncertainty theories and decision-aiding methods for natural risk management in a context of imperfect information

    Science.gov (United States)

    Tacnet, Jean-Marc; Dupouy, Guillaume; Carladous, Simon; Dezert, Jean; Batton-Hubert, Mireille

    2017-04-01

    In mountain areas, natural phenomena such as snow avalanches, debris-flows and rock-falls put people and objects at risk, with sometimes dramatic consequences. Risk is classically considered as a combination of hazard, itself the combination of the intensity and frequency of the phenomenon, and vulnerability, which corresponds to the consequences of the phenomenon on exposed people and material assets. Risk management consists in identifying the risk level as well as choosing the best strategies for risk prevention, i.e. mitigation. In the context of natural phenomena in mountainous areas, technical and scientific knowledge is often lacking. Risk management decisions are therefore based on imperfect information. This information comes from more or less reliable sources ranging from historical data and expert assessments to numerical simulations. Finally, risk management decisions are the result of complex knowledge management and reasoning processes. Tracing the information and propagating information quality from data acquisition to decisions are therefore important steps in the decision-making process. One major goal today is therefore to assist decision-making while considering the availability, quality and reliability of information content and sources. A global integrated framework is proposed to improve the risk management process in a context of imperfect information provided by more or less reliable sources: uncertainty as well as imprecision, inconsistency and incompleteness are considered. Several methods are used and associated in an original way: sequential decision context description, development of specific multi-criteria decision-making methods, imperfection propagation in numerical modeling, and information fusion. This framework not only assists in decision-making but also traces the process and evaluates the impact of information quality on decision-making. We focus on and present two main developments. The first one relates to uncertainty and imprecision

  17. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    Science.gov (United States)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  18. Life cycle tools combined with multi-criteria and participatory methods for agricultural sustainability: Insights from a systematic and critical review.

    Science.gov (United States)

    De Luca, Anna Irene; Iofrida, Nathalie; Leskinen, Pekka; Stillitano, Teodora; Falcone, Giacomo; Strano, Alfio; Gulisano, Giovanni

    2017-10-01

    Life cycle (LC) methodologies have attracted great interest in agricultural sustainability assessments, even if, at the same time, they have sometimes been criticized for making unrealistic assumptions and subjective choices. To cope with these weaknesses, Multi-Criteria Decision Analysis (MCDA) and/or participatory methods can be used to balance and integrate different sustainability dimensions. The purpose of this study is to highlight how life cycle approaches have been combined with MCDA and participatory methods to address agricultural sustainability in the published scientific literature. A systematic and critical review was developed, highlighting the following features: which multi-criterial and/or participatory methods have been associated with LC tools; how they have been integrated or complemented (methodological relationships); the intensity of the involvement of stakeholders (degree of participation); and which synergies have been achieved by combining the methods. The main typology of integration was represented by multi-criterial frameworks integrating LC evaluations. LC tools can provide MCDA studies with local and global information on how to reduce negative impacts and avoid burden shifts, while MCDA methods can help LC practitioners deal with subjective assumptions in an objective way, take into consideration actors' values, and overcome trade-offs among the different dimensions of sustainability. Considerations concerning the further development of Life Cycle Sustainability Assessment (LCSA) have been identified as well. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. A sequential mixed methods research approach to investigating HIV ...

    African Journals Online (AJOL)

    2016-09-03

    Sep 3, 2016 ... Sequential mixed methods research is an effective approach for ... show the effectiveness of the research method. ... qualitative data before quantitative datasets ..... whereby both types of data are collected simultaneously.

  20. Detecting loci under recent positive selection in dairy and beef cattle by combining different genome-wide scan methods.

    Directory of Open Access Journals (Sweden)

    Yuri Tani Utsunomiya

    Full Text Available As the methodologies available for the detection of positive selection from genomic data vary in terms of assumptions and execution, weak correlations are expected among them. However, if a given signal is consistently supported across different methodologies, this is strong evidence that the locus has been under past selection. In this paper, a straightforward frequentist approach based on the Stouffer method to combine P-values across different tests for evidence of recent positive selection in common variations, as well as strategies for extracting biological information from the detected signals, are described and applied to high-density single nucleotide polymorphism (SNP) data generated from dairy and beef cattle (taurine and indicine). The ancestral Bovinae allele state of over 440,000 SNPs is also reported. Using this combination of methods, highly significant (P < 3.17×10⁻⁷) population-specific sweeps pointing to candidate genes and pathways that may be involved in beef and dairy production were identified. The most significant signal was found in the Cornichon homolog 3 gene (CNIH3) in Brown Swiss (P = 3.82×10⁻¹²), and may be involved in the regulation of the pre-ovulatory luteinizing hormone surge. Other putative pathways under selection are glycolysis/gluconeogenesis, the transcription machinery and chemokine/cytokine activity in Angus; the calpain-calpastatin system and ribosome biogenesis in Brown Swiss; and ganglioside deposition in milk fat globules in Gyr. The composite method, combined with the strategies applied to retrieve functional information, may be a useful tool for surveying genome-wide selective sweeps and providing insights into the sources of selection.
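The Stouffer combination of P-values at the core of this composite approach can be sketched as follows (a generic, unweighted implementation for independent one-sided tests, not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def stouffer_combine(pvalues):
    """Combine one-sided P-values from independent tests (Stouffer Z-method)."""
    z = norm.isf(np.asarray(pvalues, dtype=float))  # P-value -> Z-score
    z_combined = z.sum() / np.sqrt(len(z))          # pooled Z statistic
    return float(norm.sf(z_combined))               # back to a combined P-value

# Four tests that individually give weak evidence combine into stronger evidence
p_combined = stouffer_combine([0.05, 0.05, 0.05, 0.05])
```

Because evidence accumulates in the pooled Z statistic, several moderately small P-values from different scan methods can yield a combined P-value far below any individual one.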

  1. A phenomenological approach for the analysis of combined fatigue and creep

    International Nuclear Information System (INIS)

    Bui-Quoc, T.; Biron, A.

    1982-01-01

    An approach is proposed for life prediction under cumulative damage conditions, for both fatigue and creep. An interaction effect is introduced to account for a modification in the material behavior due to previous loading. A predictive technique is then developed which is applied to several materials for fatigue and which could potentially be used for creep. Given the similarity of the formulation for both phenomena, the analysis of the combination of fatigue and creep is then carried out through a straightforward sequential use of the two damage functions. Several patterns are studied with and without an interaction effect. (orig.)

  2. Combining density functional and incremental post-Hartree-Fock approaches for van der Waals dominated adsorbate-surface interactions: Ag{sub 2}/graphene

    Energy Technology Data Exchange (ETDEWEB)

    Lara-Castells, María Pilar de, E-mail: Pilar.deLara.Castells@csic.es [Instituto de Física Fundamental (C.S.I.C.), Serrano 123, E-28006 Madrid (Spain); Mitrushchenkov, Alexander O. [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallée (France); Stoll, Hermann [Institut für Theoretische Chemie, Universität Stuttgart, D-70550 Stuttgart (Germany)

    2015-09-14

    A combined density functional (DFT) and incremental post-Hartree-Fock (post-HF) approach, proven earlier to calculate He-surface potential energy surfaces [de Lara-Castells et al., J. Chem. Phys. 141, 151102 (2014)], is applied to describe the van der Waals dominated Ag{sub 2}/graphene interaction. It extends the dispersionless density functional theory developed by Pernal et al. [Phys. Rev. Lett. 103, 263201 (2009)] by including periodic boundary conditions while the dispersion is parametrized via the method of increments [H. Stoll, J. Chem. Phys. 97, 8449 (1992)]. Starting with the elementary cluster unit of the target surface (benzene), continuing through the realistic cluster model (coronene), and ending with the periodic model of the extended system, modern ab initio methodologies for intermolecular interactions as well as state-of-the-art van der Waals-corrected density functional-based approaches are put together both to assess the accuracy of the composite scheme and to better characterize the Ag{sub 2}/graphene interaction. The present work illustrates how the combination of DFT and post-HF perspectives may be efficient to design simple and reliable ab initio-based schemes in extended systems for surface science applications.

  3. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    DEFF Research Database (Denmark)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme as an effective approach to medical diagnosis. Having come to know the advantages and disadvantages of each computational intelligence technique in recent years, the time has come...

  4. Combining Generalized Phase Contrast with matched filtering into a versatile beam shaping approach

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin

    2010-01-01

    We adapt concepts from matched filtering to propose a method for generating reconfigurable multiple beams. Combined with the Generalized Phase Contrast (GPC) technique, the proposed method coined mGPC can yield dynamically reconfigurable optical beam arrays with high light efficiency for optical...... manipulation, high-speed sorting and other parallel spatial light applications [1]....

  5. Computation of the free energy change associated with one-electron reduction of coenzyme immersed in water: a novel approach within the framework of the quantum mechanical/molecular mechanical method combined with the theory of energy representation.

    Science.gov (United States)

    Takahashi, Hideaki; Ohno, Hajime; Kishi, Ryohei; Nakano, Masayoshi; Matubayasi, Nobuyuki

    2008-11-28

    The isoalloxazine ring (flavin ring) is part of the coenzyme flavin adenine dinucleotide and acts as an active site in the oxidation of a substrate. We have computed the free energy change Δμ_red associated with one-electron reduction of the flavin ring immersed in water by utilizing the recently developed quantum mechanical/molecular mechanical method combined with the theory of energy representation (QM/MM-ER method). As a novel treatment in implementing the QM/MM-ER method, we have identified the excess charge to be attached to the flavin ring as the solute, while the remaining molecules, i.e., the flavin ring and the surrounding water molecules, are treated as solvent species. The reduction free energy can then be decomposed into the contribution Δμ_red(QM) due to the oxidant described quantum chemically and the free energy Δμ_red(MM) due to the water molecules represented by a classical model. As the sum of these contributions, the total reduction free energy Δμ_red has been obtained as -80.1 kcal/mol. To examine the accuracy and efficiency of this approach, we have also conducted the Δμ_red calculation using the conventional scheme in which Δμ_red is constructed from the solvation free energies of the flavin ring in the oxidized and reduced states. The conventional scheme has been implemented with the QM/MM-ER method and the calculated Δμ_red has been estimated as -81.0 kcal/mol, showing excellent agreement with the value given by the new approach. The present approach is efficient, in particular, for computing the free energy change of a reaction occurring in a protein, since it enables one to circumvent the numerical problem brought about by subtracting the huge solvation free energies of the protein in the two states before and after the reduction.

  6. Combining phenotypic and proteomic approaches to identify membrane targets in a ‘triple negative’ breast cancer cell type

    Directory of Open Access Journals (Sweden)

    Rust Steven

    2013-02-01

    Full Text Available Abstract Background The continued discovery of therapeutic antibodies, which address unmet medical needs, requires the continued discovery of tractable antibody targets. Multiple protein-level target discovery approaches are available and these can be used in combination to extensively survey relevant cell membranomes. In this study, the MDA-MB-231 cell line was selected for membranome survey as it is a ‘triple negative’ breast cancer cell line, which represents a cancer subtype that is aggressive and has few treatment options. Methods The MDA-MB-231 breast carcinoma cell line was used to explore three membranome target discovery approaches, which were used in parallel to cross-validate the significance of identified antigens. A proteomic approach, which used membrane protein enrichment followed by protein identification by mass spectrometry, was used alongside two phenotypic antibody screening approaches. The first phenotypic screening approach was based on hybridoma technology and the second was based on phage display technology. Antibodies isolated by the phenotypic approaches were tested for cell specificity as well as internalisation and the targets identified were compared to each other as well as those identified by the proteomic approach. An anti-CD73 antibody derived from the phage display-based phenotypic approach was tested for binding to other ‘triple negative’ breast cancer cell lines and tested for tumour growth inhibitory activity in a MDA-MB-231 xenograft model. Results All of the approaches identified multiple cell surface markers, including integrins, CD44, EGFR, CD71, galectin-3, CD73 and BCAM, some of which had been previously confirmed as being tractable to antibody therapy. In total, 40 cell surface markers were identified for further study. In addition to cell surface marker identification, the phenotypic antibody screening approaches provided reagent antibodies for target validation studies. This is illustrated

  7. Combining Partial Least Squares and the Gradient-Boosting Method for Soil Property Retrieval Using Visible Near-Infrared Shortwave Infrared Spectra

    Directory of Open Access Journals (Sweden)

    Lanfa Liu

    2017-12-01

    Full Text Available Soil spectroscopy has experienced a tremendous increase in soil property characterisation, and can be used not only in the laboratory but also from space (imaging spectroscopy). Partial least squares (PLS) regression is one of the most common approaches for the calibration of soil properties using soil spectra. Besides functioning as a calibration method, PLS can also be used as a dimension-reduction tool, which has scarcely been studied in soil spectroscopy. PLS components retained from high-dimensional spectral data can further be explored with the gradient-boosted decision tree (GBDT) method. Three soil sample categories were extracted from the Land Use/Land Cover Area Frame Survey (LUCAS) soil library according to the type of land cover (woodland, grassland, and cropland). First, PLS regression and GBDT were separately applied to build spectroscopic models for soil organic carbon (OC), total nitrogen content (N), and clay for each soil category. Then, PLS-derived components were used as input variables for the GBDT model. The results demonstrate that the combined PLS-GBDT approach performs better than PLS or GBDT alone. The variable importances for soil property estimation revealed by the proposed method demonstrate that PLS is a useful dimension-reduction tool for soil spectra that retains target-related information.

  8. Methods to improve genomic prediction and GWAS using combined Holstein populations

    DEFF Research Database (Denmark)

    Li, Xiujin

    The thesis focuses on methods to improve GWAS and genomic prediction using combined Holstein populations, and investigates G by E interaction. The conclusions are: 1) Prediction reliabilities for Brazilian Holsteins can be increased by adding Nordic and French genotyped bulls, and a large G by E...... interaction exists between populations. 2) Combining data from Chinese and Danish Holstein populations increases the power of GWAS and detects new QTL regions for milk fatty acid traits. 3) The novel multi-trait Bayesian model efficiently estimates region-specific genomic variances, covariances...

  9. Face recognition by combining eigenface method with different wavelet subbands

    Institute of Scientific and Technical Information of China (English)

    MA Yan; LI Shun-bao

    2006-01-01

    A method combining the eigenface approach with different wavelet subbands for face recognition is proposed. Each training image is decomposed into multiple subbands, from which eigenvector sets and projection vectors are extracted. In the recognition process, the inner product distances between the projection vector of the test image and those of the training images are calculated. The training image corresponding to the maximum distance under the given threshold condition is taken as the final result. Experimental results on the ORL and YALE face databases show that, compared with the eigenface method applied directly to the image domain or to a single wavelet subband, the recognition accuracy of the proposed method is improved by 5% without affecting recognition speed.
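A numpy-only sketch of the subband-eigenface idea follows. A 2×2 averaging step stands in for the wavelet approximation subband (a real implementation would use a DWT library), and the faces, sizes, and component count are invented:

```python
import numpy as np

def approx_subband(img):
    """Haar-style approximation subband (stand-in for a DWT library call)."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

rng = np.random.default_rng(1)
train = rng.normal(size=(20, 16, 16))                 # hypothetical training faces
probe = train[3] + 0.01 * rng.normal(size=(16, 16))   # noisy copy of face 3

# Eigenvectors of the subband training set (eigenface step via SVD)
X = np.stack([approx_subband(f).ravel() for f in train])
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
proj = (X - mean) @ Vt[:10].T                         # projection vectors (training)

# Match by normalized inner product between projection vectors
q = (approx_subband(probe).ravel() - mean) @ Vt[:10].T
sims = (proj @ q) / (np.linalg.norm(proj, axis=1) * np.linalg.norm(q) + 1e-12)
best = int(np.argmax(sims))                           # index of best-matching face
```

The normalized inner product plays the role of the inner-product distance in the abstract: the probe is assigned to the training image whose projection vector is most aligned with its own.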

  10. Iterative approach as alternative to S-matrix in modal methods

    Science.gov (United States)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M³ operations. In this work we consider alternatives to the S-matrix technique based on purely iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on overall time, and in some cases even of reducing the number of arithmetic operations to M² by applying iterative techniques, is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.
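The direct-versus-iterative trade-off for large sparse systems can be illustrated in a few lines with SciPy. The system below is a generic diagonally dominant tridiagonal matrix, not one of the modal-method matrices discussed in the abstract:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, splu

# Hypothetical sparse system standing in for a large eigenmode-expansion problem
n = 500
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = splu(A).solve(b)   # direct sparse LU factorization
x_iter, info = gmres(A, b)    # Krylov iterative solve (info == 0 on success)
```

The direct factorization pays its cost up front, while the Krylov solver only needs matrix-vector products, which is where the potential saving over cubic-cost techniques comes from when the matrix is large and applied many times.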

  11. Combination of the optical waveguide lightmode spectroscopy method with electrochemical measurements

    Energy Technology Data Exchange (ETDEWEB)

    Szendro, I.; Erdelyi, K.; Fabian, M. [MicroVacuum Ltd., Kerekgyarto u.: 10, H-1147 Budapest (Hungary); Puskas, Zs. [Minvasive Ltd., Goldmann Gy. ter 3., H-1111 Budapest (Hungary); Adanyi, N. [Central Food Research Institute, H-1537 Budapest, P.O.B. 393 (Hungary); Somogyi, K. [MicroVacuum Ltd., Kerekgyarto u.: 10, H-1147 Budapest (Hungary)], E-mail: karoly.somogyi@microvacuum.com

    2008-09-30

    Optical waveguides are normally sensitive to the surrounding medium and to surface contamination. The effective refractive index changes at the surface, and various sensor systems have been developed based on this effect. One of the most sensitive and effective methods is optical waveguide lightmode spectroscopy (OWLS). At the same time, electrochemical methods are widely used in both inorganic and organic chemistry, and advantages in microbiology have also been demonstrated. In this work, efforts are made and results are presented on the combination of these two methods for the simultaneous measurement of refractive index and electrical current changes caused by the presence of the cells/molecules/ions to be investigated. An electrically conductive indium tin oxide (ITO) nanolayer is deposited and activated on top of the OWLS planar waveguide oxide layer. The ITO layers serve as working electrodes in the electrochemical measurements. The basic setup and an integrated system are demonstrated here. Measurements using H{sub 2}O{sub 2} and toluidine blue solutions, with KCl and TRIS solutions as buffer and transport media, are presented. The measurements show both the changes detected by the sensor layer and the effect of the applied potential in cyclic and chrono-voltammetric measurements. The results demonstrate an effective combination of optical and electrochemical methods.

  12. A Combined Approach of Sensor Data Fusion and Multivariate Geostatistics for Delineation of Homogeneous Zones in an Agricultural Field

    Directory of Open Access Journals (Sweden)

    Annamaria Castrignanò

    2017-12-01

    Full Text Available To assess spatial variability at the very fine scale required by Precision Agriculture, different proximal and remote sensors have been used. They provide large amounts of data of different types which need to be combined. An integrated approach is described here, using multivariate geostatistical data-fusion techniques and multi-source geophysical sensor data to determine simple summary scale-dependent indices. These indices can be used to delineate management zones to be submitted to differential management. This data-fusion approach with geophysical sensors was applied to the soil of an agronomic field cropped with tomato. The synthetic regionalized factors determined contributed to splitting the 3D edaphic environment into two main horizontal structures with different hydraulic properties and to disclosing two main horizons in the 0–1.0-m depth, with a discontinuity probably occurring between 0.40 m and 0.70 m. Comparing this partition with the soil properties measured with shallow sampling, it was possible to verify the coherence in the topsoil between the dielectric properties and other properties more directly related to agronomic management. These results confirm the advantages of using proximal sensing as a preliminary step in the application of site-specific management. Combining disparate spatial data (data fusion) is not at all a naive problem, and novel and powerful methods need to be developed.

  13. A Combined Approach of Sensor Data Fusion and Multivariate Geostatistics for Delineation of Homogeneous Zones in an Agricultural Field.

    Science.gov (United States)

    Castrignanò, Annamaria; Buttafuoco, Gabriele; Quarto, Ruggiero; Vitti, Carolina; Langella, Giuliano; Terribile, Fabio; Venezia, Accursio

    2017-12-03

    To assess spatial variability at the very fine scale required by Precision Agriculture, different proximal and remote sensors have been used. They provide large amounts of data of different types which need to be combined. An integrated approach is described here, using multivariate geostatistical data-fusion techniques and multi-source geophysical sensor data to determine simple summary scale-dependent indices. These indices can be used to delineate management zones to be submitted to differential management. This data-fusion approach with geophysical sensors was applied to the soil of an agronomic field cropped with tomato. The synthetic regionalized factors determined contributed to splitting the 3D edaphic environment into two main horizontal structures with different hydraulic properties and to disclosing two main horizons in the 0-1.0-m depth, with a discontinuity probably occurring between 0.40 m and 0.70 m. Comparing this partition with the soil properties measured with shallow sampling, it was possible to verify the coherence in the topsoil between the dielectric properties and other properties more directly related to agronomic management. These results confirm the advantages of using proximal sensing as a preliminary step in the application of site-specific management. Combining disparate spatial data (data fusion) is not at all a naive problem, and novel and powerful methods need to be developed.

  14. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnostic and/or prognostic accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers that maximizes the area under the receiver operating characteristic curve (AUC), primarily focusing on settings with a small number of well-defined biomarkers. The problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods in such settings. The second aim is to propose a new combination method, namely the pairwise approach, to maximize the AUC. Our simulation studies demonstrate that the performance of several existing methods can become unsatisfactory as the number of markers grows, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implications of our study for the design of optimal linear combination methods are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
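
    The objective being maximized here, the empirical AUC of a linear marker combination, can be computed with the Mann-Whitney statistic. The sketch below is a generic illustration of that objective, not the paper's pairwise estimator:

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Empirical AUC as the Mann-Whitney U statistic: the fraction of
    (diseased, healthy) pairs where the diseased score is higher,
    counting ties as one half."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

def combination_auc(beta, X_pos, X_neg):
    """AUC of the linear combination X @ beta, where the rows of
    X_pos / X_neg are marker vectors of diseased / healthy subjects."""
    return empirical_auc(X_pos @ beta, X_neg @ beta)
```

    A combination method searches over `beta` to maximize `combination_auc`; the paper's contribution is how that search behaves when the markers are numerous and weak.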

  15. Robot Evaluation and Selection with Entropy-Based Combination Weighting and Cloud TODIM Approach

    Directory of Open Access Journals (Sweden)

    Jing-Jing Wang

    2018-05-01

    Full Text Available Nowadays, robots are commonly adopted in various manufacturing industries to improve product quality and productivity. The selection of the best robot to suit a specific production setting is a difficult decision making task for manufacturers because of the increasing complexity and number of robot systems. In this paper, we explore two key issues of robot evaluation and selection: the representation of decision makers’ diversified assessments and the determination of the ranking of available robots. Specifically, a decision support model which utilizes the cloud model and TODIM (an acronym in Portuguese for interactive and multiple criteria decision making) is developed for handling robot selection problems with hesitant linguistic information. Besides, we use an entropy-based combination weighting technique to estimate the weights of the evaluation criteria. Finally, we illustrate the proposed cloud TODIM approach with a robot selection example for an automobile manufacturer, and further validate its effectiveness and benefits via a comparative analysis. The results show that the proposed robot selection model has some unique advantages, being more realistic and flexible for robot selection under a complex and uncertain environment.
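
    The entropy side of such a weighting technique can be illustrated with the standard entropy weight method: criteria whose values vary more across the alternatives (lower Shannon entropy) receive larger objective weights. This is a generic sketch assuming a positive, benefit-type decision matrix, not the paper's exact combination formulation:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from Shannon entropy.
    X: decision matrix (alternatives x criteria) with positive entries.
    Columns with lower entropy (more spread) carry more weight."""
    P = X / X.sum(axis=0)                  # column-wise proportions
    plogp = np.zeros_like(P)
    mask = P > 0                           # convention: 0 * log 0 = 0
    plogp[mask] = P[mask] * np.log(P[mask])
    ent = -plogp.sum(axis=0) / np.log(X.shape[0])   # entropy in [0, 1]
    d = 1.0 - ent                          # degree of divergence
    return d / d.sum()
```

    A criterion on which all robots score identically gets (near-)zero weight, since it cannot discriminate between them.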

  16. Human Activity-Understanding: A Multilayer Approach Combining Body Movements and Contextual Descriptors Analysis

    Directory of Open Access Journals (Sweden)

    Consuelo Granata

    2015-07-01

    Full Text Available A deep understanding of human activity is key to successful human-robot interaction (HRI). The translation of sensed human behavioural signals/cues and context descriptors into an encoded human activity remains a challenge because of the complex nature of human actions. In this paper, we propose a multilayer framework for the understanding of human activity to be implemented in a mobile robot. It consists of a perception layer which exploits an RGB-D-based skeleton tracking output used to simulate a physical model of virtual human dynamics in order to compensate for the inaccuracy and inconsistency of the raw data. A multi-support vector machine (MSVM) model, trained with features describing human motor coordination through temporal segments in combination with environment descriptors (object affordance), is used to recognize each sub-activity (classification layer). The interpretation of sequences of classified elementary actions is based on discrete hidden Markov models (DHMMs) (interpretation layer). The framework assessment was performed on the Cornell Activity Dataset (CAD-120) [1]. The performance of our method is comparable with that presented in [2] and clearly shows the relevance of this model-based approach.

  17. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of engineering systems. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions of competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and it is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is used to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is used to incorporate both model form uncertainty and prediction error. A numerical problem of concrete creep is used to demonstrate the processes of quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process.
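
    The model averaging step can be sketched generically: model probabilities are derived from the likelihood of each model's measured deviation from experiment (equal priors assumed), and the between-model variance of the averaged prediction reflects model form uncertainty. The Gaussian error model and the function names are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def model_weights(residuals, sigma):
    """Model probabilities from Gaussian likelihoods of the measured
    model-vs-experiment differences, assuming equal prior weights."""
    loglik = np.array([-0.5 * np.sum((r / sigma) ** 2) for r in residuals])
    w = np.exp(loglik - loglik.max())      # shift for numerical stability
    return w / w.sum()

def model_average(predictions, weights):
    """Averaged prediction and the between-model variance, i.e. the
    extra spread contributed by model form uncertainty."""
    mean = np.dot(weights, predictions)
    var_between = np.dot(weights, (predictions - mean) ** 2)
    return mean, var_between
```

    A model that fits the measurements poorly receives a near-zero weight and barely influences the averaged system response.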

  18. Environmental impact efficiency of natural gas combined cycle power plants: A combined life cycle assessment and dynamic data envelopment analysis approach.

    Science.gov (United States)

    Martín-Gamboa, Mario; Iribarren, Diego; Dufour, Javier

    2018-02-15

    The energy sector is still dominated by the use of fossil resources. In particular, natural gas represents the third most consumed resource and is a significant source of electricity in many countries. Since electricity production in natural gas combined cycle (NGCC) plants provides some benefits with respect to other non-renewable technologies, it is often seen as a transitional solution towards a future low-carbon power generation system. However, given the environmental profile and operational variability of NGCC power plants, an eco-efficiency assessment of them is required. In this respect, this article uses a novel combined Life Cycle Assessment (LCA) and dynamic Data Envelopment Analysis (DEA) approach in order to estimate, over the period 2010-2015, the environmental impact efficiencies of 20 NGCC power plants located in Spain. A three-step LCA+DEA method is applied, which involves data acquisition, calculation of environmental impacts through LCA, and the novel estimation of environmental impact efficiency (overall- and term-efficiency scores) through dynamic DEA. Although only 1 out of 20 NGCC power plants is found to be environmentally efficient, all plants show a relatively good environmental performance, with overall eco-efficiency scores above 60%. Regarding individual periods, 2011 was, on average, the year with the highest environmental impact efficiency (95%), accounting for 5 efficient NGCC plants. Notably, a link between a high number of operating hours and high environmental impact efficiency is observed. Finally, preliminary environmental benchmarks are presented as an additional outcome in order to further support decision-makers on the path towards eco-efficiency in NGCC power plants. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. ANALYSIS OF COMBINED UAV-BASED RGB AND THERMAL REMOTE SENSING DATA: A NEW APPROACH TO CROWD MONITORING

    Directory of Open Access Journals (Sweden)

    S. Schulte

    2017-08-01

    Full Text Available Collecting vast amounts of data does not by itself fulfil the information needs of crowd monitoring; it is rather important to collect data suitable for meeting specific information requirements. To address this issue, a prototype was developed to facilitate the combination of UAV-based RGB and thermal remote sensing datasets. In an experimental approach, image sensors were mounted on a remotely piloted aircraft and captured two video datasets over a crowd. A group of volunteers performed diverse movements depicting real-world scenarios. The prototype, programmed in MATLAB, derives the movement on the ground. This novel detection approach using combined data is then evaluated against detection algorithms that use only a single data source. Our tests show that the combination of RGB and thermal remote sensing data is beneficial for crowd monitoring with regard to the detection of crowd movement.

  20. Combining flow cytometry and 16S rRNA gene pyrosequencing: a promising approach for drinking water monitoring and characterization.

    Science.gov (United States)

    Prest, E I; El-Chakhtoura, J; Hammes, F; Saikaly, P E; van Loosdrecht, M C M; Vrouwenvelder, J S

    2014-10-15

    The combination of flow cytometry (FCM) and 16S rRNA gene pyrosequencing data was investigated for the purpose of monitoring and characterizing microbial changes in drinking water distribution systems. High frequency sampling (5 min intervals for 1 h) was performed at the outlet of a treatment plant and at one location in the full-scale distribution network. In total, 52 bulk water samples were analysed with FCM, pyrosequencing and conventional methods (adenosine-triphosphate, ATP; heterotrophic plate count, HPC). FCM and pyrosequencing results individually showed that changes in the microbial community occurred in the water distribution system, which was not detected with conventional monitoring. FCM data showed an increase in the total bacterial cell concentrations (from 345 ± 15 × 10³ to 425 ± 35 × 10³ cells mL⁻¹) and in the percentage of intact bacterial cells (from 39 ± 3.5% to 53 ± 4.4%) during water distribution. This shift was also observed in the FCM fluorescence fingerprints, which are characteristic of each water sample. A similar shift was detected in the microbial community composition as characterized with pyrosequencing, showing that FCM and genetic fingerprints are congruent. FCM and pyrosequencing data were subsequently combined for the calculation of cell concentration changes for each bacterial phylum. The results revealed an increase in cell concentrations of specific bacterial phyla (e.g., Proteobacteria), along with a decrease in other phyla (e.g., Actinobacteria), which could not be concluded from the two methods individually. The combination of FCM and pyrosequencing methods is a promising approach for future drinking water quality monitoring and for advanced studies on drinking water distribution pipeline ecology. Copyright © 2014 Elsevier Ltd. All rights reserved.
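
    The combination step, multiplying the FCM total cell concentration by each phylum's 16S relative abundance to obtain phylum-level cell concentrations, amounts to simple arithmetic. The abundance fractions below are hypothetical, chosen only to mirror the reported direction of change:

```python
def phylum_cell_concentrations(total_cells, relative_abundance):
    """Absolute cell concentration per phylum (cells/mL): the FCM
    total count multiplied by each phylum's 16S relative abundance."""
    return {phylum: total_cells * frac
            for phylum, frac in relative_abundance.items()}

# Hypothetical relative abundances for the two sampling locations
plant = phylum_cell_concentrations(
    345e3, {"Proteobacteria": 0.40, "Actinobacteria": 0.30})
network = phylum_cell_concentrations(
    425e3, {"Proteobacteria": 0.55, "Actinobacteria": 0.20})
```

    With these made-up fractions, the Proteobacteria concentration rises while the Actinobacteria concentration falls even though the total count increases, which is the kind of conclusion neither method supports on its own.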

  1. Global Practical Stabilization and Tracking for an Underactuated Ship - A Combined Averaging and Backstepping Approach

    Directory of Open Access Journals (Sweden)

    Kristin Y. Pettersen

    1999-10-01

    Full Text Available We solve both the global practical stabilization and tracking problem for an underactuated ship, using a combined integrator backstepping and averaging approach. Exponential convergence to an arbitrarily small neighbourhood of the origin and of the reference trajectory, respectively, is proved. Simulation results are included.

  2. A method of meta-mechanism combination and replacement based on motion study

    Directory of Open Access Journals (Sweden)

    Yadong Fang

    2015-01-01

    Full Text Available Lacking effective methods to reduce labor and cost, many small- and medium-sized assembly companies have long faced the problem of high costs. In order to reduce the cost of manual operations, a method of meta-mechanism combination and replacement is studied. In this paper, we mainly discuss assembly motion analysis, the acquisition of workpiece position information, the construction of a motion library, assembly motion analysis by Maynard’s operation sequence technique, the establishment of a meta-mechanism database, and the matching of motions and mechanisms. At the same time, the principle, process, and system realization framework of mechanism replacement are introduced. Finally, the problems of low-cost automation of the production line are largely resolved by operator motion analysis and meta-mechanism combination and matching.

  3. Optimal Combinations of Diagnostic Tests Based on AUC.

    Science.gov (United States)

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
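
    The cross-validation adjustment described above can be sketched generically: fit the combination coefficients on the training folds and evaluate the AUC on the held-out fold, so the reported AUC avoids the optimism of re-substitution. The Fisher discriminant direction is used here only as a convenient stand-in for the paper's nonparametric coefficient estimation, and the fold-splitting details are illustrative:

```python
import numpy as np

def auc(scores, labels):
    """Empirical AUC (Mann-Whitney) of scores against binary labels."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

def fisher_coeffs(X, y):
    """Fisher discriminant direction as combination coefficients
    (stand-in for the nonparametric estimation in the paper)."""
    mu1, mu0 = X[y == 1].mean(0), X[y == 0].mean(0)
    S = np.cov(X[y == 1].T) + np.cov(X[y == 0].T)
    return np.linalg.solve(np.atleast_2d(S), mu1 - mu0)

def cv_auc(X, y, k=5, seed=0):
    """K-fold cross-validated AUC of the fitted linear combination;
    assumes each fold contains both classes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    aucs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta = fisher_coeffs(X[train], y[train])
        aucs.append(auc(X[fold] @ beta, y[fold]))
    return float(np.mean(aucs))
```

    The re-substitution AUC would instead fit `beta` and score it on the same data, which is the upward-biased estimate the paper corrects.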

  4. Modifiable Combining Functions

    OpenAIRE

    Cohen, Paul; Shafer, Glenn; Shenoy, Prakash P.

    2013-01-01

    Modifiable combining functions are a synthesis of two common approaches to combining evidence. They offer many of the advantages of these approaches and avoid some disadvantages. Because they facilitate the acquisition, representation, explanation, and modification of knowledge about combinations of evidence, they are proposed as a tool for knowledge engineers who build systems that reason under uncertainty, not as a normative theory of evidence.

  5. An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method

    Directory of Open Access Journals (Sweden)

    Jibum Kim

    2014-01-01

    Full Text Available We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton’s method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with the nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
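
    The Hessian modification idea can be illustrated with one standard strategy: add a multiple of the identity until a Cholesky factorization succeeds, then take the Newton step with the modified Hessian. The paper proposes several such methods; the sketch below shows only this one generic variant on a toy quadratic, not the authors' mesh-quality objective:

```python
import numpy as np

def modified_newton(grad, hess, x0, tol=1e-10, max_iter=100):
    """Newton's method with a simple Hessian modification: shift the
    Hessian by tau * I until it is positive definite (verified by an
    attempted Cholesky factorization), then solve for the step."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        tau = 0.0
        while True:
            try:
                np.linalg.cholesky(H + tau * np.eye(len(x)))
                break
            except np.linalg.LinAlgError:
                tau = max(2 * tau, 1e-3)   # grow the shift and retry
        x = x + np.linalg.solve(H + tau * np.eye(len(x)), -g)
    return x

# Toy quadratic objective f(x) = 0.5 x^T A x - b^T x (A is PD here,
# so no shift is actually needed and Newton converges in one step)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
xmin = modified_newton(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```

    On an indefinite Hessian, the shift loop turns the step back into a descent direction, which is the practical point of the modification methods compared in the paper.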

  6. A Combined Fault Diagnosis Method for Power Transformer in Big Data Environment

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2017-01-01

    Full Text Available The fault diagnosis method based on dissolved gas analysis (DGA) is of great significance for detecting potential faults of transformers and improving the security of the power system. The DGA data of transformers in a smart grid have the characteristics of large quantity, multiple types, and low value density. In view of these characteristics of DGA big data, this paper first proposes a new combined fault diagnosis method for transformers, in which a variety of fault diagnosis models make a preliminary diagnosis and a support vector machine then makes the second diagnosis. The method adopts the idea of intelligent complementarity and blending, which overcomes the shortcomings of single diagnosis models in transformer fault diagnosis and improves the diagnostic accuracy and the scope of application of the model. Then, the training and deployment strategy of the combined diagnosis model is designed based on the Storm and Spark platforms, providing a solution for transformer fault diagnosis in a big data environment.

  7. Application of combined shrinkage stoping and pillarless sublevel caving mining method to a uranium deposit

    International Nuclear Information System (INIS)

    Fan Changjun

    2012-01-01

    The pillarless sublevel caving mining method was used to mine ore in a uranium mine. Because the ore-rock interface varied greatly, part of the ore could not be recovered effectively in the mining process, resulting in its permanent loss. Aimed at this problem, a combined shrinkage stoping and pillarless sublevel caving mining method is presented. Practice shows that with the combined method ore recovery is increased, the dilution rate is reduced, and mining safety is greatly improved. (authors)

  8. Combining AHP and DEA Methods for Selecting a Project Manager

    Directory of Open Access Journals (Sweden)

    Baruch Keren

    2014-07-01

    Full Text Available A project manager has a major influence on the success or failure of a project. A good project manager can match the strategy and objectives of the organization with the goals of the project. Therefore, the selection of an appropriate project manager is a key factor for the success of the project. A potential project manager is judged by his or her proven performance and personal qualifications. This paper proposes a method to calculate the weighted scores and the full ranking of candidates for managing a project, and to select the best of those candidates. The proposed method combines two specific methodologies, Data Envelopment Analysis (DEA) and the Analytical Hierarchical Process (AHP), and uses DEA ranking methods to enhance selection.
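
    The AHP side of such a combination can be illustrated by the standard eigenvector method: criteria (or candidate) weights are taken from the principal eigenvector of a pairwise comparison matrix. This generic sketch is not the paper's specific DEA-AHP integration:

```python
import numpy as np

def ahp_weights(P):
    """Weights as the principal eigenvector of a pairwise comparison
    matrix P (reciprocal, positive), normalized to sum to one."""
    vals, vecs = np.linalg.eig(P)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)
    return w / w.sum()

# Perfectly consistent example: criterion 1 is judged twice as
# important as criterion 2, which is twice as important as criterion 3
P = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])
```

    For an inconsistent matrix, the gap between the principal eigenvalue and the matrix size feeds AHP's consistency check, which is omitted in this sketch.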

  9. Recognition of chemical entities: combining dictionary-based and grammar-based approaches

    Science.gov (United States)

    2015-01-01

    Background The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. Results The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. Conclusions We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition.

  10. Recognition of chemical entities: combining dictionary-based and grammar-based approaches.

    Science.gov (United States)

    Akhondi, Saber A; Hettne, Kristina M; van der Horst, Eelke; van Mulligen, Erik M; Kors, Jan A

    2015-01-01

    The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition, outperforming the individual systems.
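
    The exact scoring formula is not given in the abstract; one plausible reading of the "normalized ratio of the term frequencies in chemical and non-chemical journals" is sketched below, together with the ranking step for the CDI task. The function names and the normalization are assumptions:

```python
def term_confidence(freq_chem, freq_nonchem):
    """Confidence that a term denotes a chemical: one plausible
    normalized ratio of its relative frequency in chemical vs.
    non-chemical journals (each in occurrences per million tokens)."""
    total = freq_chem + freq_nonchem
    return freq_chem / total if total else 0.0

def rank_compounds(term_freqs):
    """Rank recognized compounds (CDI-style) by confidence score.
    term_freqs maps term -> (freq_chem, freq_nonchem)."""
    scored = {t: term_confidence(fc, fn) for t, (fc, fn) in term_freqs.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

    A term seen almost exclusively in chemistry journals scores near 1, while a common-language word seen equally in both corpora scores 0.5.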

  11. Combining a weighted caseload study with an organisational analysis in courts: first experiences with a new methodological approach in Switzerland

    Directory of Open Access Journals (Sweden)

    Daniela Winkler

    2015-07-01

    Full Text Available Determining the weighted caseload, i.e. the average amount of working time needed to process cases of different categories, by means of different methodological approaches yields case weights that indicate the current performance of a court. However, as the weighted caseload is often used in allocating resources or cases, the results of a weighted caseload study may be contested with the argument that it is not clear whether they are based on an average good performance, or whether higher or lower values could be assumed if operational management were optimised or qualitative aspects taken into account. Suitable methods therefore usually include quality adjustments of the weighted caseload, and the values can be validated using benchmarking. In Switzerland there is a general lack of workload measurement in courts. Therefore, in an analysis of the courts and of the Cantonal Prosecutor’s Office of a Swiss canton, another method of validating weighted caseload values has been applied: the combination of a weighted caseload study with an organisational analysis. This paper introduces the new methodological approach and outlines preliminary methodological findings.

  12. Mapping Mixed Methods Research: Methods, Measures, and Meaning

    Science.gov (United States)

    Wheeldon, J.

    2010-01-01

    This article explores how concept maps and mind maps can be used as data collection tools in mixed methods research to combine the clarity of quantitative counts with the nuance of qualitative reflections. Based on more traditional mixed methods approaches, this article details how the use of pre/post concept maps can be used to design qualitative…

  13. Promising ethical arguments for product differentiation in the organic food sector. A mixed methods research approach.

    Science.gov (United States)

    Zander, Katrin; Stolz, Hanna; Hamm, Ulrich

    2013-03-01

    Ethical consumerism is a growing trend worldwide. Ethical consumers' expectations are increasing, and neither the Fairtrade nor the organic farming concept covers all the ethical concerns of consumers. Against this background, the aim of this research is to elicit consumers' preferences regarding organic food with additional ethical attributes and their relevance at the market place. A mixed methods research approach was applied by combining an Information Display Matrix, Focus Group Discussions and Choice Experiments in five European countries. According to the results of the Information Display Matrix, 'higher animal welfare', 'local production' and 'fair producer prices' were preferred in all countries. These three attributes were discussed in depth with the Focus Groups, using rather emotive ways of labelling. While the ranking of the attributes was the same, the emotive way of communicating these attributes was, for the most part, disliked by participants. The same attributes were then used in Choice Experiments, but with completely revised communication arguments. According to the results of the Focus Groups, the arguments were presented in a factual manner, using short and concise statements. In this research step, consumers in all countries except Austria gave priority to 'local production'. 'Higher animal welfare' and 'fair producer prices' turned out to be relevant for buying decisions only in Germany and Switzerland. According to our results, there is substantial potential for product differentiation in the organic sector through the use of production standards that exceed existing minimum regulations. The combination of different research methods in a mixed methods approach proved very helpful: the results of earlier research steps provided the basis from which to learn, and findings could be applied in subsequent steps and used to adjust and deepen the research design. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. New approach to equipment quality evaluation method with distinct functions

    Directory of Open Access Journals (Sweden)

    Milisavljević Vladimir M.

    2016-01-01

    Full Text Available The paper presents a new approach to improving a method for the quality evaluation and selection of equipment (devices and machinery) by applying distinct functions. Quality evaluation and selection of devices and machinery is a multi-criteria problem which involves the consideration of numerous parameters of various origins. The original selection method with distinct functions is based on technical parameters with arbitrary evaluation of the importance (weighting) of each parameter. The improvement of this method, presented in this paper, addresses the issue of parameter weighting by using the Delphi method. Finally, two case studies are provided, covering the quality evaluation of standard heating boilers and of load-haul-dump (LHD) machines, to demonstrate the applicability of this approach. The Analytic Hierarchy Process (AHP) is used as a control method.

  15. Comprehensive Evaluation of the Sustainable Development of Power Grid Enterprises Based on the Model of Fuzzy Group Ideal Point Method and Combination Weighting Method with Improved Group Order Relation Method and Entropy Weight Method

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2017-10-01

    Full Text Available As an important implementing body of the national energy strategy, grid enterprises bear the important responsibility of optimizing the allocation of energy resources and serving economic and social development, and their level of sustainable development has a direct impact on the national economy and social life. In this paper, a model combining the fuzzy group ideal point method with a combination weighting method based on an improved group order relation method and the entropy weight method is proposed to evaluate the sustainable development of power grid enterprises. Firstly, on the basis of an extensive literature review, important criteria for the comprehensive evaluation of the sustainable development of power grid enterprises are preliminarily selected. The opinions of industry experts are consulted and fed back over several rounds through the Delphi method, the evaluation criteria system for the sustainable development of power grid enterprises is determined, and the evaluation criteria are then made consistent and dimensionless. After that, based on the basic order relation method, the weights of each expert judgment matrix are synthesized to construct compound matter elements, and the subjective weights of the criteria are obtained by matter element analysis. The entropy weight method is used to determine the objective weights of the preprocessed criteria. Then, combining the subjective and objective information with the combination weighting method based on subjective and objective weighted attribute value consistency, a more comprehensive, reasonable and accurate combination weight is calculated. Finally, based on the traditional TOPSIS method, triangular fuzzy numbers are introduced to better realize the scientific processing of data that are difficult to quantify, and the queuing indication value of each object and the ranking result are obtained. A numerical example is taken to prove that the
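
    The entropy weight step described in this record has a compact closed form: criteria whose values are spread more unevenly across alternatives carry more information and receive larger objective weights. A minimal sketch, with an illustrative decision matrix (not data from the paper):

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via the entropy weight method.

    X: (m alternatives x n criteria) matrix of positive,
    benefit-oriented criterion values.
    """
    # Normalize each column so entries sum to 1 (proportions p_ij).
    P = X / X.sum(axis=0)
    m = X.shape[0]
    # Shannon entropy per criterion, scaled to [0, 1] by 1/ln(m).
    logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)
    # Degree of divergence: criteria with lower entropy discriminate
    # more between alternatives and receive higher weight.
    d = 1.0 - e
    return d / d.sum()

# Toy decision matrix: 4 alternatives, 3 criteria.
X = np.array([[7.0, 0.2, 3.0],
              [6.5, 0.9, 3.1],
              [8.0, 0.3, 2.9],
              [7.2, 0.8, 3.0]])
w = entropy_weights(X)
print(w)  # weights sum to 1; the second criterion (most spread) dominates
```

    In the record's full model these objective weights are then blended with the subjective, order-relation-based weights before the fuzzy TOPSIS ranking.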

  16. Application of 1 D Finite Element Method in Combination with Laminar Solution Method for Pipe Network Analysis

    Science.gov (United States)

    Dudar, O. I.; Dudar, E. S.

    2017-11-01

    The features of applying the one-dimensional finite element method (FEM) in combination with the laminar solutions method (LSM) to the calculation of underground ventilating networks are considered. In this case the processes of heat and mass transfer change the properties of the fluid (a binary vapour-air mix), and under the action of gravitational forces this leads to such phenomena as natural draft, local circulation, etc. The FEM relations considering the action of gravity, the mass conservation law, and the dependence of the vapour-air mix properties on the thermodynamic parameters are derived so as to allow these phenomena to be modelled. The analogy of the elastic and plastic rod deformation processes to the processes of laminar and turbulent flow in a pipe is described. Owing to this analogy, the guaranteed convergence of the elastic solutions method for materials of plastic type implies the guaranteed convergence of the LSM for any regime of turbulent flow in a rough pipe. The convergence rate of the FEM-LSM is investigated by means of numerical experiments and proves to be much higher than that of the Cross-Andriyashev method. Data of other authors comparing the convergence rates of the finite element method, the Newton method and the gradient method are provided. These data allow one to conclude that the FEM in combination with the LSM is one of the most effective methods for the calculation of hydraulic and ventilating networks. The FEM-LSM has been used to create the research application package “MineClimate”, which calculates the microclimate parameters in underground ventilating networks.

  17. Combining 2-m temperature nowcasting and short range ensemble forecasting

    Directory of Open Access Journals (Sweden)

    A. Kann

    2011-12-01

    Full Text Available During recent years, numerical ensemble prediction systems have become an important tool for estimating the uncertainties of dynamical and physical processes as represented in numerical weather models. The latest generation of limited area ensemble prediction systems (LAM-EPSs) allows for probabilistic forecasts at high resolution in both space and time. However, these systems still suffer from systematic deficiencies. Especially for nowcasting (0–6 h) applications the ensemble spread is smaller than the actual forecast error. This paper tries to generate probabilistic short range 2-m temperature forecasts by combining a state-of-the-art nowcasting method and a limited area ensemble system, and compares the results with statistical methods. The Integrated Nowcasting Through Comprehensive Analysis (INCA) system, which has been in operation at the Central Institute for Meteorology and Geodynamics (ZAMG) since 2006 (Haiden et al., 2011), provides short range deterministic forecasts at high temporal (15–60 min) and spatial (1 km) resolution. An INCA Ensemble (INCA-EPS) of 2-m temperature forecasts is constructed by applying a dynamical approach, a statistical approach, and a combined dynamic-statistical method. The dynamical method takes uncertainty information (i.e. ensemble variance) from the operational limited area ensemble system ALADIN-LAEF (Aire Limitée Adaptation Dynamique Développement InterNational Limited Area Ensemble Forecasting), which is running operationally at ZAMG (Wang et al., 2011). The purely statistical method assumes a well-calibrated spread-skill relation and applies ensemble spread according to the skill of the INCA forecast of the most recent past. The combined dynamic-statistical approach adapts the ensemble variance gained from ALADIN-LAEF with non-homogeneous Gaussian regression (NGR), which yields a statistical correction of the first and second moment (mean bias and dispersion) for Gaussian distributed continuous variables.
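
    The NGR correction described above maps the raw ensemble mean and variance to a calibrated Gaussian forecast: mu = a + b*mean, sigma^2 = c + d*var. A minimal sketch; the coefficients a, b, c, d below are illustrative assumptions (in practice they are trained on past forecast/observation pairs, e.g. by minimizing the CRPS), not values from the paper:

```python
import math

def ngr_predictive(ens_mean, ens_var, a, b, c, d):
    """Non-homogeneous Gaussian regression (NGR): the calibrated
    forecast is N(mu, var) with
        mu  = a + b * ens_mean   (bias correction)
        var = c + d * ens_var    (spread correction)."""
    mu = a + b * ens_mean
    var = c + d * ens_var
    return mu, var

def normal_cdf(x, mu, var):
    return 0.5 * (1.0 + math.erf((x - mu) / math.sqrt(2.0 * var)))

# Illustrative (not fitted) coefficients: the raw ensemble is assumed
# 0.5 K warm-biased and underdispersive, as is typical for nowcasting.
mu, var = ngr_predictive(ens_mean=2.0, ens_var=0.4, a=-0.5, b=1.0, c=0.3, d=1.2)
p_below_zero = normal_cdf(0.0, mu, var)  # P(2-m temperature < 0 degC)
print(round(mu, 2), round(var, 2), round(p_below_zero, 3))
```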

  18. A method to evaluate equitable accessibility : Combining ethical theories and accessibility-based approaches

    NARCIS (Netherlands)

    Lucas, K.; Van Wee, G.P.; Maat, C.

    2015-01-01

    In this paper, we present the case that traditional transport appraisal methods do not sufficiently capture the social dimensions of mobility and accessibility. Understanding these dimensions is, however, highly relevant for policymakers assessing the impacts of their transport decisions. These dimensions

  19. Dynamic translabial ultrasound versus echodefecography combined with the endovaginal approach to assess pelvic floor dysfunctions: How effective are these techniques?

    Science.gov (United States)

    Murad-Regadas, S M; Karbage, S A; Bezerra, L S; Regadas, F S P; da Silva Vilarinho, A; Borges, L B; Regadas Filho, F S P; Veras, L B

    2017-07-01

    The aim of this study was to evaluate the role of dynamic translabial ultrasound (TLUS) in the assessment of pelvic floor dysfunction and to compare the results with echodefecography (EDF) combined with the endovaginal approach. Consecutive female patients with pelvic floor dysfunction were eligible. Each patient was assessed with EDF combined with the endovaginal approach and with TLUS. The diagnostic accuracy of TLUS was evaluated using the results of EDF as the standard for comparison. A total of 42 women were included. Four sphincter defects were identified with both techniques; EDF clearly showed whether the defect was partial or total and additionally identified the pubovisceral muscle defect. There was substantial concordance regarding normal relaxation and anismus. Perfect concordance was found for rectocele and cystocele. The rectocele depth was measured with TLUS and quantified according to the EDF classification. Fair concordance was found for intussusception. There was no correlation between the displacement of the puborectal muscle at maximum straining on EDF and the displacement of the anorectal junction (ARJ), compared at rest and at maximal straining on TLUS to determine perineal descent (PD). The mean ARJ displacement was similar in patients with normal and those with excessive PD on TLUS. Both modalities can be used to assess pelvic floor dysfunction. EDF using the 3D anorectal and endovaginal approaches showed advantages in the identification of anal sphincter and pubovisceral muscle defects (partial or total). There was good correlation between the two techniques, and a size-based TLUS rectocele classification corresponding to the established EDF classification was defined.

  20. Combining p-values in replicated single-case experiments with multivariate outcome.

    Science.gov (United States)

    Solmi, Francesca; Onghena, Patrick

    2014-01-01

    Interest in combining probabilities has a long history in the statistical community. The first steps in this direction were taken by Ronald Fisher, who introduced the idea of combining p-values of independent tests to provide a global decision rule when multiple aspects of a given problem were of interest. An interesting approach to this idea of combining p-values is the one based on permutation theory. The methods belonging to this approach exploit the permutation distributions of the tests to be combined and use a simple function to combine probabilities. Combining p-values finds a very interesting application in the analysis of replicated single-case experiments. In this field the focus, when comparing different treatment effects, is more articulated than when just looking at the means of the different populations. Moreover, it is often of interest to combine the results obtained for the single patients in order to get more global information about the phenomenon under study. This paper gives an overview of how the concept of combining p-values was conceived and how it can be easily handled via permutation techniques. Finally, the method of combining p-values is applied to a simulated replicated single-case experiment, and a numerical illustration is presented.
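
    Fisher's combining rule mentioned above has a compact closed form: under k independent true null hypotheses, -2 * sum(ln p_i) follows a chi-square distribution with 2k degrees of freedom. A minimal sketch (the per-case p-values are illustrative):

```python
import math

def chi2_cdf(x, df):
    # Chi-square CDF; for even df (as here, df = 2k) there is a closed
    # form: P(X <= x) = 1 - exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    k = df // 2
    term, s = 1.0, 1.0
    for j in range(1, k):
        term *= (x / 2.0) / j
        s += term
    return 1.0 - math.exp(-x / 2.0) * s

def fisher_combine(pvalues):
    """Fisher's method: combine independent p-values into one global
    test statistic and p-value."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    return x, 1.0 - chi2_cdf(x, 2 * k)

# Three per-case p-values from replicated single-case experiments.
ps = [0.08, 0.12, 0.05]
stat, p_global = fisher_combine(ps)
print(round(stat, 3), round(p_global, 4))
```

    Note how three individually non-significant results combine into a globally significant one; the permutation-based variants in the record replace the chi-square reference distribution with permutation distributions.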

  1. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modelled with multiple materials and large-scale unstructured meshes, and the finite element method has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for linear finite element equations suffer from slow convergence or fail to converge. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. Some research has addressed this issue, but the existing approaches are not suitable for cases of complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)

  2. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the more well known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
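
    One of the linearization techniques compared in this record, the Frank-Wolfe method, repeatedly minimizes a linearization of the convex objective over the feasible set and takes a convex-combination step. A toy sketch on box constraints rather than the transportation polytope itself; the objective, bounds and function names are illustrative:

```python
def frank_wolfe_box(grad, lo, hi, x0, iters=200):
    """Frank-Wolfe (conditional gradient) on box constraints:
    each step minimizes the linearized objective over the feasible
    set (for a box this picks a corner), then moves partway there."""
    x = list(x0)
    for k in range(1, iters + 1):
        g = grad(x)
        # Linear minimization oracle over the box: pick the bound that
        # minimizes the inner product with the gradient, per coordinate.
        s = [lo[i] if g[i] > 0 else hi[i] for i in range(len(x))]
        gamma = 2.0 / (k + 2.0)  # standard diminishing step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

# Toy separable convex objective f(x) = sum (x_i - t_i)^2 with targets
# inside the box, so the constrained minimum equals the targets.
targets = [0.3, 0.7]
grad = lambda x: [2 * (xi - ti) for xi, ti in zip(x, targets)]
x = frank_wolfe_box(grad, lo=[0, 0], hi=[1, 1], x0=[0.0, 0.0])
print([round(v, 2) for v in x])  # close to [0.3, 0.7]
```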

  3. Algebraic Verification Method for SEREs Properties via Groebner Bases Approaches

    Directory of Open Access Journals (Sweden)

    Ning Zhou

    2013-01-01

    Full Text Available This work presents an efficient solution using a computer algebra system to perform linear temporal property verification for synchronous digital systems. The method is essentially based on both Groebner bases approaches and symbolic simulation. A mechanism for constructing canonical polynomial-set-based symbolic representations for both circuit descriptions and assertions is studied. We then present a complete checking algorithm framework based on these algebraic representations by using Groebner bases. The computational results in this work show that the algebraic approach is a quite competitive checking method and will be a useful supplement to the existing verification methods based on simulation.

  4. Combining biophysical methods for the analysis of protein complex stoichiometry and affinity in SEDPHAT

    International Nuclear Information System (INIS)

    Zhao, Huaying; Schuck, Peter

    2015-01-01

    Global multi-method analysis for protein interactions (GMMA) can increase the precision and complexity of binding studies for the determination of the stoichiometry, affinity and cooperativity of multi-site interactions. The principles and recent developments of biophysical solution methods implemented for GMMA in the software SEDPHAT are reviewed, their complementarity in GMMA is described and a new GMMA simulation tool set in SEDPHAT is presented. Reversible macromolecular interactions are ubiquitous in signal transduction pathways, often forming dynamic multi-protein complexes with three or more components. Multivalent binding and cooperativity in these complexes are often key motifs of their biological mechanisms. Traditional solution biophysical techniques for characterizing the binding and cooperativity are very limited in the number of states that can be resolved. A global multi-method analysis (GMMA) approach has recently been introduced that can leverage the strengths and the different observables of different techniques to improve the accuracy of the resulting binding parameters and to facilitate the study of multi-component systems and multi-site interactions. Here, GMMA is described in the software SEDPHAT for the analysis of data from isothermal titration calorimetry, surface plasmon resonance or other biosensing, analytical ultracentrifugation, fluorescence anisotropy and various other spectroscopic and thermodynamic techniques. The basic principles of these techniques are reviewed and recent advances in view of their particular strengths in the context of GMMA are described. Furthermore, a new feature in SEDPHAT is introduced for the simulation of multi-method data. In combination with specific statistical tools for GMMA in SEDPHAT, simulations can be a valuable step in the experimental design

  5. Novel approach to the fabrication of an artificial small bone using a combination of sponge replica and electrospinning methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yang-Hee; Lee, Byong-Taek, E-mail: lbt@sch.ac.kr [Department of Biomedical Engineering and Materials, School of Medicine, Soonchunhyang University 366-1, Ssangyong-dong, Cheonan, Chungnam 330-090 (Korea, Republic of)

    2011-06-15

    In this study, a novel artificial small bone consisting of ZrO{sub 2}-biphasic calcium phosphate/polymethylmethacrylate-polycaprolactone-hydroxyapatite (ZrO{sub 2}-BCP/PMMA-PCL-HAp) was fabricated using a combination of sponge replica and electrospinning methods. To mimic the cancellous bone, the ZrO{sub 2}/BCP scaffold was composed of three layers, ZrO{sub 2}, ZrO{sub 2}/BCP and BCP, fabricated by the sponge replica method. The PMMA-PCL fibers loaded with HAp powder were wrapped around the ZrO{sub 2}/BCP scaffold using the electrospinning process. To imitate the Haversian canal region of the bone, HAp-loaded PMMA-PCL fibers were wrapped around a steel wire of 0.3 mm diameter. As a result, the bundles of fiber wrapped around the wires imitated the osteon structure of the cortical bone. Finally, the ZrO{sub 2}/BCP scaffold was surrounded by HAp-loaded PMMA-PCL composite bundles. After removal of the steel wires, the ZrO{sub 2}/BCP scaffold and bundles of HAp-loaded PMMA-PCL formed an interconnected structure resembling the human bone. Its diameter, compressive strength and porosity were approximately 12 mm, 5 MPa and 70%, respectively, and the viability of MG-63 osteoblast-like cells was determined to be over 90% by the MTT (3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide) assay. This artificial bone shows excellent cytocompatibility and is a promising bone regeneration material.

  6. Combined perventricular septal defect closure and patent ductus arteriosus ligation via the lower ministernotomy approach.

    Science.gov (United States)

    Voitov, Alexey; Omelchenko, Alexander; Gorbatykh, Yuriy; Bogachev-Prokophiev, Alexander; Karaskov, Alexander

    2018-02-01

    Over the past decade, minimally invasive approaches have been advocated for the surgical correction of congenital defects to reduce costs related to hospitalization and for improved cosmesis. Minimal skin incisions and partial sternotomy reduce surgical trauma; however, these techniques might not be successful in treating a number of congenital pathological conditions, particularly combined congenital defects. We focused on cases with a combined presentation of ventricular septal defect and patent ductus arteriosus. We studied 12 infants who successfully underwent single-stage surgical closure of a combined ventricular septal defect and patent ductus arteriosus through a lower ministernotomy, without using cardiopulmonary bypass or X-rays. No intraoperative or early postoperative complications or mortality were noted. Postoperative echocardiography did not reveal residual shunts. The proposed technique is safe and reproducible in infants. © Crown copyright 2017.

  7. Studies of the Raman Spectra of Cyclic and Acyclic Molecules: Combination and Prediction Spectrum Methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taijin; Assary, Rajeev S.; Marshall, Christopher L.; Gosztola, David J.; Curtiss, Larry A.; Stair, Peter C.

    2012-04-02

    A combination of Raman spectroscopy and density functional methods was employed to investigate the spectral features of selected molecules: furfural, 5-hydroxymethyl furfural (HMF), methanol, acetone, acetic acid, and levulinic acid. The computed spectra and measured spectra are in excellent agreement, consistent with previous studies. Using the combination and prediction spectrum method (CPSM), we were able to predict the important spectral features of two platform chemicals, HMF and levulinic acid. The results have shown that CPSM is a useful alternative method for predicting vibrational spectra of complex molecules in the biomass transformation process.

  8. Treatment of waste water by a combined technique of radiation and conventional method

    International Nuclear Information System (INIS)

    Sakumoto, A.; Miyata, T.

    1984-01-01

    Treatment of waste water by radiation in combination with a conventional method such as biological oxidation, coagulation with Fe2(SO4)3, and ozonation has been studied with the aim of reducing the necessary dose. Ethylene glycol ethers, polyoxyethylene n-nonyl phenyl ether (NPE), polyvinyl alcohol (PVA), ethylene glycol, phenol, and oxalic acid were used as model pollutants. The combined use of radiation and biological oxidation markedly improved the removal of TOC in aqueous oxygenated solutions of ethylene glycol ethers. The combined use of radiation and coagulation had remarkable effects on the reduction of TOC in aqueous deoxygenated solutions of NPE or PVA. The simultaneous use of radiation and ozone gave a synergistic effect on the oxidative degradation of organic pollutants. The synergistic effect was suggested to arise from chain reactions involving a powerful oxidizing agent (the OH radical). The rate of TOC removal by the process depended on dose rate. An aqueous solution of 150 mg/l oxalic acid was treated by the combined use of electron beams and ozone using a new type of irradiation vessel, reducing TOC with G(-TOC) of 87 at 2.3 × 10^7 rad/h. The simultaneous use of radiation and ozone is superior to other combined methods for the removal of TOC and can be applied irrespective of the type of organic matter. (author)

  9. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
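
    For contrast, the brute-force Monte-Carlo resampling average that the analytical approach replaces can be sketched as follows; each resample re-evaluates the statistic, which is exactly the repeated cost the replica/TAP approximation avoids (data and statistic are illustrative):

```python
import random

def bootstrap_average(data, statistic, n_resamples=2000, seed=0):
    """Resampling average of a statistic: draw bootstrap samples with
    replacement and average the statistic over them."""
    rng = random.Random(seed)
    n = len(data)
    total = 0.0
    for _ in range(n_resamples):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        total += statistic(sample)
    return total / n_resamples

data = [1.2, 0.7, 2.3, 1.9, 0.4, 1.5]
mean_of_means = bootstrap_average(data, lambda s: sum(s) / len(s))
print(round(mean_of_means, 2))  # close to the plain sample mean
```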

  10. A novel combined interventional radiologic and hepatobiliary surgical approach to a complex traumatic hilar biliary stricture

    Directory of Open Access Journals (Sweden)

    Rachel E. NeMoyer

    Full Text Available Introduction: Benign strictures of the biliary system are challenging and uncommon conditions requiring a multidisciplinary team for appropriate management. Presentation of case: The patient is a 32-year-old male who developed a hilar stricture as a sequela of a gunshot wound. Due to the complex nature of the stricture and scarring at the porta hepatis, a combined interventional radiologic and surgical approach was carried out to access the hilum of the right and left hepatic ducts. The location of the stricture was found intraoperatively by ultrasound guidance, using a balloon-tipped catheter placed under fluoroscopy in the interventional radiology suite prior to surgery. This allowed the surgeons to select the line of parenchymal transection for best visualization of the stricture. A left hepatectomy was performed, the internal stent located, and the right hepatic duct opened tangentially to allow a side-to-side Roux-en-Y hepaticojejunostomy (a Puestow-like anastomosis). Discussion: Injury to the intrahepatic biliary ductal confluence is rarely fatal; however, the associated injuries lead to severe morbidity, as seen in this example. Management of these injuries poses a considerable challenge to the surgeon and treating physicians. Conclusion: Here we describe an innovative multi-disciplinary approach to the repair of this rare injury. Keywords: Combined approach, Interventional radiology, Hepatobiliary surgery, Complex traumatic hilar biliary stricture, Case report

  11. Exploring a Flipped Classroom Approach in a Japanese Language Classroom: A Mixed Methods Study

    Science.gov (United States)

    Prefume, Yuko Enomoto

    2015-01-01

    A flipped classroom approach promotes active learning and increases teacher-student interactions by maximizing face-to-face class time (Hamdan, McKnight, Mcknight, Arfstrom, & Arfstrom, 2013). In this study, "flipped classroom" is combined with the use of technology and is described as an instructional approach that provides lectures…

  12. Combining multiple decisions: applications to bioinformatics

    International Nuclear Information System (INIS)

    Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods
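
    The ECOC framework underlying both approaches assigns each class a binary codeword over the dichotomizers and decodes a sample by minimum Hamming distance, so that a few classifier errors can be corrected by the code's redundancy. A minimal sketch with a hypothetical 4-class code matrix (not one from the article):

```python
def ecoc_decode(binary_outputs, code_matrix):
    """Error-correcting output coding (ECOC): predict the class whose
    codeword has minimum Hamming distance to the observed outputs of
    the binary classifiers."""
    best_class, best_dist = None, None
    for cls, codeword in enumerate(code_matrix):
        dist = sum(o != c for o, c in zip(binary_outputs, codeword))
        if best_dist is None or dist < best_dist:
            best_class, best_dist = cls, dist
    return best_class, best_dist

# 4-class toy code matrix with 6 binary dichotomizers (rows = classes).
CODE = [
    [0, 0, 0, 1, 1, 1],
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0],
]
# One classifier (the third) mis-fires for a class-2 sample; the
# redundancy in the code still recovers the right class.
observed = [1, 0, 0, 0, 1, 0]
print(ecoc_decode(observed, CODE))  # → (2, 1)
```

    The two methods reviewed in the record refine this hard decoding: the first learns per-classifier weights, the second models each bit as a noisy channel with a probabilistic bit-inversion error.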

  13. Newton’s method an updated approach of Kantorovich’s theory

    CERN Document Server

    Ezquerro Fernández, José Antonio

    2017-01-01

    This book shows the importance of studying semilocal convergence in iterative methods through Newton's method and addresses the most important aspects of the Kantorovich's theory including implicated studies. Kantorovich's theory for Newton's method used techniques of functional analysis to prove the semilocal convergence of the method by means of the well-known majorant principle. To gain a deeper understanding of these techniques the authors return to the beginning and present a deep-detailed approach of Kantorovich's theory for Newton's method, where they include old results, for a historical perspective and for comparisons with new results, refine old results, and prove their most relevant results, where alternative approaches leading to new sufficient semilocal convergence criteria for Newton's method are given. The book contains many numerical examples involving nonlinear integral equations, two boundary value problems and systems of nonlinear equations related to numerous physical phenomena. The book i...
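
    The semilocal character of Kantorovich's theory can be illustrated in the scalar case: convergence of Newton's method is guaranteed from quantities computable at the starting point alone, via the classical criterion h = L * beta * eta <= 1/2. A sketch (the test function is illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Kantorovich's semilocal criterion (scalar form): with
#   beta >= 1/|f'(x0)|, eta >= |f(x0)/f'(x0)|, and L a Lipschitz
#   constant for f', convergence is guaranteed when
#   h = L * beta * eta <= 1/2.
f = lambda x: x * x - 2.0        # root: sqrt(2)
fp = lambda x: 2.0 * x
x0 = 1.5
beta = 1.0 / abs(fp(x0))
eta = abs(f(x0) / fp(x0))
L = 2.0                          # f'' = 2 everywhere
h = L * beta * eta
print(h <= 0.5, round(newton(f, fp, x0), 10))  # → True 1.4142135624
```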

  14. Multi-UAV Flight using Virtual Structure Combined with Behavioral Approach

    Directory of Open Access Journals (Sweden)

    Kownacki Cezary

    2016-06-01

    Full Text Available Implementations of multi-UAV systems can be divided mainly into two different approaches: a centralised system that synchronises the positions of each vehicle through a ground station, and an autonomous system based on decentralised control, which offers more flexibility and independence. Decentralisation of multi-UAV control entails the need for information sharing between all vehicles, which in some cases could be problematic due to the significant amount of data to be sent over the wireless network. To improve the reliability and the throughput of information sharing inside the formation of UAVs, this paper proposes an approach that combines a virtual structure with a leader and two flocking behaviours. Each UAV is assigned a different virtual migration point referenced to the leader's position, which is simultaneously the origin of a formation reference frame. All migration points together create a virtual rigid structure. Each vehicle uses the local behaviours of cohesion and repulsion, respectively, to track its assigned point in the structure and to avoid a collision with the previous UAV in the structure. To calculate the parameters of the local behaviours, each UAV should know the position and attitude of the leader, to define the formation reference frame, as well as the actual position of the previous UAV in the structure. Hence, information sharing can be based on a chain of local peer-to-peer communication between consecutive vehicles in the structure. In such a solution, the information about the leader can be sequentially transmitted from one UAV to another. Numerical simulations were prepared and carried out to verify the effectiveness of the presented approach. Trajectories recorded during those simulations show collective, coherent and collision-free flights of a formation of five UAVs.

  15. Modified Lip Repositioning with Esthetic Crown Lengthening: A Combined Approach to Treating Excessive Gingival Display.

    Science.gov (United States)

    Sánchez, Isis M; Gaud-Quintana, Sadja; Stern, Jacob K

    Lip repositioning surgery to address excessive gingival display induced by different etiologies has received major attention recently. Several techniques and variations have been reported, including myotomy or repositioning of the levator labii superioris muscle, Le Fort impaction, maxillary gingivectomies, botulinum toxin injections, and lip stabilization. This study reports a case of excessive gingival display treated by a modified combined approach. A 25-year-old woman with a 4- to 8-mm gingival display when smiling, caused by a combination of short clinical crowns induced by altered passive eruption and hypermobility of the upper lip, underwent a staged esthetic crown-lengthening procedure followed by a modified lip repositioning technique. A description of the technique and a comparison with other modes of therapy are discussed. This modified approach for treating the hypermobile lip included the bilateral removal of a partial-thickness strip of mucosa from the maxillary buccal vestibule without severing the muscle, leaving the midline frenum intact and suturing the lip mucosa to the mucogingival line. The narrower vestibule and increased tooth length resulted in a symmetric and pleasing gingival display when smiling that remained stable over time. With proper diagnosis and sequencing of therapy, modified lip repositioning surgery combined with esthetic crown lengthening can be used predictably to treat excessive gingival display and enhance smile esthetics.

  16. METHOD FOR SELECTION OF PROJECT MANAGEMENT APPROACH BASED ON FUZZY CONCEPTS

    Directory of Open Access Journals (Sweden)

    Igor V. KONONENKO

    2017-03-01

    Full Text Available An analysis of the literature devoted to the selection of a project management approach, and to the development of effective methods for solving this problem, is given. A mathematical model and a method for the selection of a project management approach with fuzzy concepts of the applicability of existing approaches are proposed. The selection is made among such approaches as the PMBOK Guide, the ISO 21500 standard, the PRINCE2 methodology, the SWEBOK Guide, and the agile methodologies Scrum, XP, and Kanban. The project parameters which have a great impact on the result of the selection, and the measure of their impact, are determined. The project parameters relate to information about the project, the team, communication, and critical project risks. They include the number of people involved in the project, the customer's experience with this project team, the project team's experience in the field, the project team's understanding of the requirements, adaptability, initiative, and others. The suggested method is illustrated by an example of its application to selecting a project management approach for a software development project.

  17. Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence

    Science.gov (United States)

    Lewis, Nicholas; Grünwald, Peter

    2018-03-01

    Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
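
    The single-step multiplicative combination the record describes can be illustrated with a deliberately simplified numerical sketch. This is not the paper's method: the noninformative-prior derivation and the actual AR5 likelihoods are omitted, and the Gaussian evidence curves and the grid below are hypothetical stand-ins.

```python
import math
from itertools import accumulate

def combine_likelihoods(grid, lik_a, lik_b, prior=None):
    """Single-step combination: multiply two independent likelihoods over
    the same parameter grid, apply a prior (uniform if None), and
    normalize to a posterior PMF over the grid points."""
    if prior is None:
        prior = [1.0] * len(grid)
    post = [la * lb * pr for la, lb, pr in zip(lik_a, lik_b, prior)]
    z = sum(post)
    return [v / z for v in post]

def gaussian(x, mu, sigma):
    # Unnormalized Gaussian shape; normalization cancels in the posterior.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical toy evidence: instrumental-period likelihood peaked near
# 1.8 K, paleoclimate likelihood near 2.5 K, on a sensitivity grid (K).
grid = [0.05 * i for i in range(1, 201)]            # 0.05 .. 10.0 K
inst = [gaussian(s, 1.8, 0.6) for s in grid]
paleo = [gaussian(s, 2.5, 1.2) for s in grid]
posterior = combine_likelihoods(grid, inst, paleo)

# Posterior median: first grid point where the cumulative mass reaches 0.5.
median = next(s for s, c in zip(grid, accumulate(posterior)) if c >= 0.5)
```

    Because the two likelihoods are multiplied in one step, the result does not depend on the order of the two evidence sources, which is the order-independence property the record contrasts with sequential Bayesian updating.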

  18. An enhanced unified uncertainty analysis approach based on first order reliability method with single-level optimization

    International Nuclear Information System (INIS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; Tooren, Michel van

    2013-01-01

    In engineering, there exist both aleatory uncertainties due to the inherent variation of the physical system and its operational environment, and epistemic uncertainties due to lack of knowledge, which can be reduced with the collection of more data. To analyze the uncertain distribution of the system performance under both aleatory and epistemic uncertainties, combined probability and evidence theory can be employed to quantify the compound effects of the mixed uncertainties. The existing First Order Reliability Method (FORM) based Unified Uncertainty Analysis (UUA) approach nests the optimization based interval analysis in the improved Hasofer–Lind–Rackwitz–Fiessler (iHLRF) algorithm based Most Probable Point (MPP) searching procedure, which is computationally prohibitive for complex systems and may encounter convergence problems as well. Therefore, in this paper it is proposed to use general optimization solvers to search the MPP in the outer loop and then reformulate the double-loop optimization problem into an equivalent single-level optimization (SLO) problem, so as to simplify the uncertainty analysis process, improve the robustness of the algorithm, and alleviate the computational complexity. The effectiveness and efficiency of the proposed method are demonstrated with two numerical examples and one practical satellite conceptual design problem. -- Highlights: ► Uncertainty analysis under mixed aleatory and epistemic uncertainties is studied. ► A unified uncertainty analysis method is proposed with combined probability and evidence theory. ► The traditional nested analysis method is converted to single level optimization for efficiency. ► The effectiveness and efficiency of the proposed method are demonstrated with three examples

  19. Combining Primary Prevention and Risk Reduction Approaches in Sexual Assault Protection Programming

    Science.gov (United States)

    Menning, Chadwick; Holtzman, Mellisa

    2015-01-01

    Objective: The object of this study is to extend prior evaluations of Elemental, a sexual assault protection program that combines primary prevention and risk reduction strategies within a single program. Participants and Methods: During 2012 and 2013, program group and control group students completed pretest, posttest, and 6-week and 6-month…

  20. State of the art in non-animal approaches for skin sensitization testing: from individual test methods towards testing strategies.

    Science.gov (United States)

    Ezendam, Janine; Braakhuis, Hedwig M; Vandebriel, Rob J

    2016-12-01

    The hazard assessment of skin sensitizers relies mainly on animal testing, but much progress is made in the development, validation and regulatory acceptance and implementation of non-animal predictive approaches. In this review, we provide an update on the available computational tools and animal-free test methods for the prediction of skin sensitization hazard. These individual test methods address mostly one mechanistic step of the process of skin sensitization induction. The adverse outcome pathway (AOP) for skin sensitization describes the key events (KEs) that lead to skin sensitization. In our review, we have clustered the available test methods according to the KE they inform: the molecular initiating event (MIE/KE1)-protein binding, KE2-keratinocyte activation, KE3-dendritic cell activation and KE4-T cell activation and proliferation. In recent years, most progress has been made in the development and validation of in vitro assays that address KE2 and KE3. No standardized in vitro assays for T cell activation are available; thus, KE4 cannot be measured in vitro. Three non-animal test methods, addressing either the MIE, KE2 or KE3, are accepted as OECD test guidelines, and this has accelerated the development of integrated or defined approaches for testing and assessment (e.g. testing strategies). The majority of these approaches are mechanism-based, since they combine results from multiple test methods and/or computational tools that address different KEs of the AOP to estimate skin sensitization potential and sometimes potency. Other approaches are based on statistical tools. Until now, eleven different testing strategies have been published, the majority using the same individual information sources. Our review shows that some of the defined approaches to testing and assessment are able to accurately predict skin sensitization hazard, sometimes even more accurately than the currently used animal test. A few defined approaches are developed to provide an

  1. Attitudes Toward Combining Psychological, Mind-Body Therapies and Nutritional Approaches for the Enhancement of Mood.

    Science.gov (United States)

    Lores, Taryn Jade; Henke, Miriam; Chur-Hansen, Anna

    2016-01-01

    Context • Interest has been rising in the use of complementary and alternative medicine (CAM) for the promotion of health and treatment of disease. To date, the majority of CAM research has focused on exploring the demographic characteristics, attitudes, and motivations of CAM users and on the efficacy of different therapies and products. Less is known with respect to the psychological characteristics of people who use CAM. Previous research has not investigated the usefulness of integrating mind-body therapies with natural products in a combined mood intervention. Objective • The study intended to investigate attitudes toward a proposed new approach to the treatment of mood, one that integrates psychological mind-body therapies and natural nutritional products. Design • Participants completed an online survey covering demographics, personality traits, locus of control, use of CAM, attitudes toward the proposed psychonutritional approach, and mood. Setting • This study was conducted at the University of Adelaide School of Psychology (Adelaide, SA, Australia). Participants • Participants were 333 members of the Australian general public, who were recruited online via the social-media platform Facebook. The majority were women (83.2%), aged between 18 and 81 y. Outcome Measures • Measures included the Multidimensional Health Locus of Control Scale Form B, the Ten-Item Personality Inventory, and the Depression, Anxiety and Stress Scale. Results • Participants were positive about the proposed approach and were likely to try it to enhance their moods. The likeliness of use of the combined approach was significantly higher in the female participants and was associated with higher levels of the personality trait openness and an internal health locus of control, after controlling for all other variables. Conclusions • Interest exists for an intervention for mood that incorporates both psychological and nutritional approaches. Further research into the

  2. Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method

    CERN Document Server

    Kulchitskii, Yu A

    2000-01-01

    Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values and the fractional energy resolution is [(58±3)%·√GeV/√E + (2.5±0.3)%] ⊕ (1.7±0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04. Results of a study of the longitudinal hadronic shower development are also presented.

  3. The lod score method.

    Science.gov (United States)

    Rice, J P; Saccone, N L; Corbett, J

    2001-01-01

    The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
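
    As a hedged illustration of the quantities the record discusses, a two-point lod score and a grid-maximized MLS statistic can be sketched as follows for phase-known meioses; the counts and grid used here are hypothetical, not taken from the record.

```python
import math

def lod_score(theta, n_recomb, n_nonrecomb):
    """Two-point lod score: log10 likelihood ratio comparing recombination
    fraction theta against free recombination (theta = 0.5), for
    phase-known meioses with binomially distributed recombinants."""
    if not 0 < theta <= 0.5:
        raise ValueError("theta must be in (0, 0.5]")
    n = n_recomb + n_nonrecomb
    log_l_theta = (n_recomb * math.log10(theta)
                   + n_nonrecomb * math.log10(1 - theta))
    log_l_null = n * math.log10(0.5)
    return log_l_theta - log_l_null

def max_lod(n_recomb, n_nonrecomb, grid=1000):
    """Maximize the lod score over a grid of theta values (the MLS
    statistic mentioned in the record). Returns (lod, theta)."""
    thetas = [0.5 * (i + 1) / grid for i in range(grid)]
    return max((lod_score(t, n_recomb, n_nonrecomb), t) for t in thetas)

# Sequential pooling: lod scores from independent pedigrees simply add,
# which is why published lod curves can be combined across reports.
combined = lod_score(0.1, 2, 18) + lod_score(0.1, 1, 9)
```

    The additivity in the last line is the property that makes the method sequential: each pedigree contributes a log-likelihood-ratio term, so pooling published results is just summation at each theta.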

  4. Approaches to systems biology. Four methods to study single-cell gene expression, cell motility, antibody reactivity, and respiratory metabolism

    DEFF Research Database (Denmark)

    Hagedorn, Peter

    To understand how complex systems, such as cells, function, comprehensive measurements of their constituent parts must be made. This can be achieved by combining methods that are each optimized to measure specific parts of the system. Four such methods, each covering a different area, are presented… from such measurements allows models of the system to be developed and tested. For each of the methods, such analysis and modelling approaches have been applied and are presented: differentially regulated genes are identified and classified according to function; cell-specific motility models… are developed that can distinguish between different surfaces; a method for selecting repertoires of antigens that separate mice based on their response to treatment is developed; and the observed concentrations of free and bound NADH are used to build and test a basic model of respiratory metabolism…

  5. Optimum Combination and Effect Analysis of Piezoresistor Dimensions in Micro Piezoresistive Pressure Sensor Using Design of Experiments and ANOVA: a Taguchi Approach

    Directory of Open Access Journals (Sweden)

    Kirankumar B. Balavalad

    2017-04-01

    Full Text Available Piezoresistive (PZR) pressure sensors have gained importance because of their robust construction, high sensitivity and good linearity. The conventional PZR pressure sensor consists of 4 piezoresistors placed on a diaphragm and connected in the form of a Wheatstone bridge. These sensors convert the stress applied to them into a change in resistance, which is quantified into voltage using the Wheatstone bridge mechanism. It is observed from the literature that the dimensions of the piezoresistors are crucial to the performance of the piezoresistive pressure sensor. This paper presents a novel mechanism for finding the best combinations and the effect of individual piezoresistor dimensions, viz. length, width and thickness, using DoE and ANOVA (Analysis of Variance), following the Taguchi experimentation approach. The paper presents a unique method to find the optimum combination of piezoresistor dimensions and also clearly illustrates the effect of the dimensions on the output of the sensor. The optimum combinations and the output response of the sensor are predicted using DoE, and a validation simulation is performed. The result of the validation simulation is compared with the predicted value of the sensor response, i.e., V. The predicted value of V is 1.074 V and the validation simulation gave the response for V as 1.19 V. This validates that the model (DoE and ANOVA) is adequate in describing V in terms of the variables defined.

  6. Combined optimal-pathlengths method for near-infrared spectroscopy analysis

    International Nuclear Information System (INIS)

    Liu Rong; Xu Kexin; Lu Yanhui; Sun Huili

    2004-01-01

    Near-infrared (NIR) spectroscopy is a rapid, reagent-less and nondestructive analytical technique, which is being increasingly employed for quantitative application in chemistry, pharmaceutics and the food industry, and for the optical analysis of biological tissue. The performance of NIR technology greatly depends on the abilities to control and acquire data from the instrument and to calibrate and analyse data. Optical pathlength is a key parameter of the NIR instrument, which has been thoroughly discussed in univariate quantitative analysis in the presence of photometric errors. Although multiple wavelengths can provide more chemical information, it is difficult to determine a single pathlength that is suitable for each wavelength region. A theoretical investigation of a selection procedure for multiple pathlengths, called the combined optimal-pathlengths (COP) method, is presented in this paper, and an extensive comparison with the single-pathlength method is also performed on simulated and experimental NIR spectral data sets. The results obtained show that the COP method can greatly improve the prediction accuracy in NIR spectroscopy quantitative analysis

  7. Calorimetric investigations of solids by combined FPPE - TWRC method

    International Nuclear Information System (INIS)

    Dadarlat, D; Streza, M; Pop, M N; Tosa, V; Delenclos, S; Longuemart, S; Sahraoui, A Hadj

    2010-01-01

    An alternative photopyroelectric (PPE) technique that combines the front detection configuration (FPPE) with the thermal-wave-resonator-cavity (TWRC) method is proposed for direct measurement of the thermal effusivity of solid materials inserted as backings in the FPPE detection cell. The method uses the scan of the normalized PPE phase as a function of sample's thickness, in the thermally thin regime for the sensor and liquid layer, and the thermally thick regime for the backing material. The value of the backing's thermal effusivity results from the optimization of the fit of the phase of the signal, performed with the sample's thickness and the backing's thermal effusivity as fitting parameters. The paper presents experimental results on several solid materials (inserted as backing in the cell) with different values of thermal effusivity (brass, steel, wood) and two liquids (ethylene glycol, water) largely used in FPPE-TWRC cells. The paper focuses mainly on the sensitivity of the technique to different liquid/backing effusivity ratios. The method appears especially suitable when investigating solids with values of thermal effusivity close to the effusivity of the liquid layer.

  8. A Combined group EA-PROMETHEE method for a supplier selection problem

    Directory of Open Access Journals (Sweden)

    Hamid Reza Rezaee Kelidbari

    2016-07-01

    Full Text Available One of the important decisions which impacts all firms’ activities is the supplier selection problem. Since the 1950s, several works have addressed this problem by treating different aspects and instances. In this paper, a combined multiple criteria decision making (MCDM) technique (EA-PROMETHEE) has been applied to support proper decision making. To this aim, after reviewing the theoretical background regarding supplier selection, the extension analysis (EA) is used to determine the importance of criteria, and PROMETHEE for the appraisal of suppliers based on the criteria. An empirical example illustrates the proposed approach.

  9. Feasible approach of contactless power transfer technology combined with HTS coils based on electromagnetic resonance coupling

    International Nuclear Information System (INIS)

    Chang, Yoon Do; Yim, Seong Woo; Hwang, Si Dole

    2013-01-01

    Contactless power transfer (CPT) systems have recently been gaining popularity, since they offer power delivery and storage with connector-free devices across a large air gap. In particular, CPT based on the electromagnetic resonance coupling method can exchange energy efficiently within 2 m. However, the power transfer efficiency of commercialized CPT products has been limited because the impedance matching of the coupled coils is sensitive. As a reasonable approach, we combined the CPT system with HTS wire technology, calling the result a superconducting contactless power transfer (SUCPT) system. Since superconducting coils have a sufficiently high current density, superconducting antenna and receiver coils in a CPT system can deliver and receive a large amount of electric energy. In this paper, we present the feasibility of the SUCPT system and examine the transmission properties of the SUCPT phenomenon between room temperature and the very low temperature of 77 K, with the receiver within a distance of 1.0 m.

  10. A Modified Generalized Fisher Method for Combining Probabilities from Dependent Tests

    Directory of Open Access Journals (Sweden)

    Hongying (Daisy) Dai

    2014-02-01

    Full Text Available Rapid developments in molecular technology have yielded a large amount of high throughput genetic data to understand the mechanism for complex traits. The increase of genetic variants requires hundreds and thousands of statistical tests to be performed simultaneously in analysis, which poses a challenge to control the overall Type I error rate. Combining p-values from multiple hypothesis testing has shown promise for aggregating effects in high-dimensional genetic data analysis. Several p-value combining methods have been developed and applied to genetic data; see [Dai, et al. 2012b] for a comprehensive review. However, there is a lack of investigations conducted for dependent genetic data, especially for weighted p-value combining methods. Single nucleotide polymorphisms (SNPs) are often correlated due to linkage disequilibrium. Other genetic data, including variants from next generation sequencing, gene expression levels measured by microarray, protein and DNA methylation data, etc., also contain complex correlation structures. Ignoring correlation structures among genetic variants may lead to severe inflation of Type I error rates for omnibus testing of p-values. In this work, we propose modifications to the Lancaster procedure by taking the correlation structure among p-values into account. The weight function in the Lancaster procedure allows meaningful biological information to be incorporated into the statistical analysis, which can increase the power of the statistical testing and/or remove the bias in the process. Extensive empirical assessments demonstrate that the modified Lancaster procedure largely reduces the Type I error rates due to correlation among p-values, and retains considerable power to detect signals among p-values. We applied our method to reassess published renal transplant data, and identified a novel association between B cell pathways and allograft tolerance.
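
    For orientation, the unweighted starting point that the Lancaster procedure generalizes, Fisher's method, can be sketched as below; this assumes independent tests and does not reproduce the paper's correlation adjustment or weight function. The closed-form chi-square tail used here is exact because the degrees of freedom, 2k, are always even.

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the global null,
    assuming the k tests are independent."""
    k = len(pvalues)
    if k == 0 or any(not 0 < p <= 1 for p in pvalues):
        raise ValueError("p-values must lie in (0, 1]")
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function for even df = 2k has a closed form:
    #   P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total
```

    The Lancaster procedure replaces the fixed weight of 2 degrees of freedom per test with test-specific weights, and the paper's modification additionally corrects the null distribution for correlation among the p-values; neither refinement is shown here.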

  11. Book Review: Comparative Education Research: Approaches and Methods

    Directory of Open Access Journals (Sweden)

    Noel Mcginn

    2014-10-01

    Full Text Available Book Review: Comparative Education Research: Approaches and Methods (2nd edition), by Mark Bray, Bob Adamson and Mark Mason (Eds.) (2014, 453p). ISBN: 978-988-17852-8-2. Hong Kong: Comparative Education Research Centre and Springer

  12. Methodical approaches to development of classification state methods of regulation business activity in fishery

    OpenAIRE

    She Son Gun

    2014-01-01

    Approaches to developing a classification of state methods of regulating the economy are considered. On the basis of the provided review, a complex method of state regulation of business activity is substantiated. The proposed principles allow improving public administration and can be used in industry concepts and state programs supporting small business in fishery.

  13. Combined acute ecotoxicity of malathion and deltamethrin to Daphnia magna (Crustacea, Cladocera): comparison of different data analysis approaches.

    Science.gov (United States)

    Toumi, Héla; Boumaiza, Moncef; Millet, Maurice; Radetski, Claudemir Marcos; Camara, Baba Issa; Felten, Vincent; Masfaraud, Jean-François; Férard, Jean-François

    2018-04-19

    We studied the combined acute effect (i.e., after 48 h) of deltamethrin (a pyrethroid insecticide) and malathion (an organophosphate insecticide) on Daphnia magna. Two approaches were used to examine the potential interaction effects of eight mixtures of deltamethrin and malathion: (i) calculation of the mixture toxicity index (MTI) and safety factor index (SFI) and (ii) response surface methodology coupled with an isobole-based statistical model (using a generalized linear model). According to the calculation of MTI and SFI, one tested mixture was found additive while the two other tested mixtures were found non-additive (MTI) or antagonistic (SFI), but these differences between index responses are only due to differences in terminology related to these two indexes. Through the surface response approach and isobologram analysis, we concluded that the binary mixtures of deltamethrin and malathion had a significant antagonistic effect on D. magna immobilization after 48 h of exposure. The index approaches and the surface response approach with isobologram analysis are complementary. Calculation of the mixture toxicity index and safety factor index allows the type of interaction to be identified for each tested mixture individually, while the surface response approach with isobologram analysis integrates all the data, providing a global outcome about the type of interactive effect. Only the surface response approach and isobologram analysis allowed the statistical assessment of the ecotoxicological interaction. Nevertheless, we recommend the use of both approaches (i) to identify the combined effects of contaminants and (ii) to improve risk assessment and environmental management.
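
    The MTI and isobole computations themselves are not reproduced here; as a simpler, standard illustration of the additivity baseline such indices compare against, a toxic-unit sum under concentration addition can be sketched as follows. The concentrations, EC50 values, and the ±0.2 classification tolerance below are hypothetical, chosen only to show the mechanics.

```python
def toxic_units(concentrations, ec50s):
    """Sum of toxic units under concentration addition: TU = sum(c_i / EC50_i).
    A mixture observed to be at its EC50 with TU ~= 1 is additive; TU > 1
    suggests antagonism (more chemical is needed than additivity predicts),
    TU < 1 suggests synergism."""
    return sum(c / e for c, e in zip(concentrations, ec50s))

def classify(tu, tol=0.2):
    """Crude interaction label from the toxic-unit sum (tolerance is a
    hypothetical cutoff, not a statistical test)."""
    if abs(tu - 1.0) <= tol:
        return "additive"
    return "antagonistic" if tu > 1.0 else "synergistic"

# Hypothetical single-compound 48-h EC50s (in the same units as the
# concentrations) and a mixture observed to immobilize 50% of D. magna:
tu = toxic_units([0.4, 1.5], [0.3, 1.0])
label = classify(tu)  # tu = 0.4/0.3 + 1.5/1.0 ~= 2.83 -> "antagonistic"
```

    Unlike this point estimate, the surface-response approach in the record fits the whole concentration-response surface, which is what allows a statistical test of the interaction rather than a single-mixture label.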

  14. Improved approach for determining thin layer thermal conductivity using the 3ω method. Application to porous Si thermal conductivity in the temperature range 77–300 K

    International Nuclear Information System (INIS)

    Valalaki, K; Nassiopoulou, A G

    2017-01-01

    An improved approach for determining thermal conductivity using the 3ω method was used to determine anisotropic porous Si thermal conductivity in the temperature range 77–300 K. In this approach, thermal conductivity is extracted from experimental data of the third harmonic of the voltage (3ω) as a function of frequency, combined with subsequent FEM simulations. The advantage is that within this approach the finite thickness of the sample and the heater are taken into account, so that the corresponding errors introduced in thermal conductivity values when using Cahill’s simplified analytical formula are eliminated. The developed method constitutes a useful tool for measuring the thermal conductivity of samples with unknown thermal properties. The thermal conductivity measurements with the 3ω method are discussed and compared with those obtained using the well-established dc method. (paper)

  15. Methods for magnetostatic field calculation

    International Nuclear Information System (INIS)

    Vorozhtsov, S.B.

    1984-01-01

    Two methods for magnetostatic field calculation, differential and integral, are considered. Both approaches are shown to have certain merits and drawbacks; the choice of method depends on the type of problem being solved. The possibility of combining these two methods in one algorithm (a hybrid method) is considered.

  16. Best practices for learning physiology: combining classroom and online methods.

    Science.gov (United States)

    Anderson, Lisa C; Krichbaum, Kathleen E

    2017-09-01

    Physiology is a requisite course for many professional allied health programs and is a foundational science for learning pathophysiology, health assessment, and pharmacology. Given the demand for online learning in the health sciences, it is important to evaluate the efficacy of online and in-class teaching methods, especially as they are combined to form hybrid courses. The purpose of this study was to compare two hybrid physiology sections in which one section was offered mostly in-class (85% in-class), and the other section was offered mostly online (85% online). The two sections in 2 yr (year 1 and year 2) were compared in terms of knowledge of physiology measured in exam scores and pretest-posttest improvement, and in measures of student satisfaction with teaching. In year 1, there were some differences on individual exam scores between the two sections, but no significant differences in mean exam scores or in pretest-posttest improvements. However, in terms of student satisfaction, the mostly in-class students in year 1 rated the instructor significantly higher than did the mostly online students. Comparisons between in-class and online students in the year 2 cohort yielded data that showed that mean exam scores were not statistically different, but pre-post changes were significantly greater in the mostly online section; student satisfaction among mostly online students also improved significantly. Education researchers must investigate effective combinations of in-class and online methods for student learning outcomes, while maintaining the flexibility and convenience that online methods provide. Copyright © 2017 the American Physiological Society.

  17. Quantum Mechanics/Molecular Mechanics Method Combined with Hybrid All-Atom and Coarse-Grained Model: Theory and Application on Redox Potential Calculations.

    Science.gov (United States)

    Shen, Lin; Yang, Weitao

    2016-04-12

    We developed a new multiresolution method that spans three levels of resolution with quantum mechanical, atomistic molecular mechanical, and coarse-grained models. The resolution-adapted all-atom and coarse-grained water model, in which an all-atom structural description of the entire system is maintained during the simulations, is combined with the ab initio quantum mechanics and molecular mechanics method. We apply this model to calculate the redox potentials of the aqueous ruthenium and iron complexes by using the fractional number of electrons approach and thermodynamic integration simulations. The redox potentials are recovered in excellent accordance with the experimental data. The speed-up of the hybrid all-atom and coarse-grained water model renders it computationally more attractive. The accuracy depends on the hybrid all-atom and coarse-grained water model used in the combined quantum mechanical and molecular mechanical method. We have used another multiresolution model, in which an atomic-level layer of water molecules around redox center is solvated in supramolecular coarse-grained waters for the redox potential calculations. Compared with the experimental data, this alternative multilayer model leads to less accurate results when used with the coarse-grained polarizable MARTINI water or big multipole water model for the coarse-grained layer.

  18. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combining the input distributions is defined by the user by introducing the appropriate FORTRAN statements into the appropriate subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined
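
    STADIC itself is a FORTRAN code; the sampling scheme the record describes can be sketched in Python as follows, with hypothetical input distributions and a hypothetical combining function standing in for the user-supplied subroutine.

```python
import random
import statistics

def combine_distributions(samplers, combine, n=20000, seed=1):
    """Monte Carlo combination of probability distributions: draw one
    sample from each input distribution, apply the user-supplied
    combining function, and treat the collected results as a random
    sample of the combined distribution. Returns the mean, standard
    deviation, and empirical 5th/95th percentile bounds."""
    rng = random.Random(seed)
    draws = sorted(combine([s(rng) for s in samplers]) for _ in range(n))
    mean = statistics.fmean(draws)
    stdev = statistics.stdev(draws)
    ci90 = (draws[int(0.05 * n)], draws[int(0.95 * n)])
    return mean, stdev, ci90

# Hypothetical example: the product of a lognormal factor and a
# uniform factor, a case with no convenient closed-form distribution.
samplers = [
    lambda rng: rng.lognormvariate(0.0, 0.5),
    lambda rng: rng.uniform(0.8, 1.2),
]
mean, stdev, ci90 = combine_distributions(samplers, lambda xs: xs[0] * xs[1])
```

    As in the record, the combining function is arbitrary, which is exactly why simulation is attractive here: an analytical derivation of the product distribution would be awkward, while the sampled statistics fall out directly.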

  19. New, rapid method to measure dissolved silver concentration in silver nanoparticle suspensions by aggregation combined with centrifugation

    International Nuclear Information System (INIS)

    Dong, Feng; Valsami-Jones, Eugenia; Kreft, Jan-Ulrich

    2016-01-01

    It is unclear whether the antimicrobial activities of silver nanoparticles (AgNPs) are exclusively mediated by the release of silver ions (Ag⁺) or, instead, are due to combined nanoparticle and silver ion effects. Therefore, it is essential to quantify dissolved Ag in nanosilver suspensions for investigations of nanoparticle toxicity. We developed a method to measure dissolved Ag in Ag⁺/AgNPs mixtures by combining aggregation of AgNPs with centrifugation. We also describe the reproducible synthesis of stable, uncoated AgNPs. Uncoated AgNPs were quickly aggregated by 2 mM Ca²⁺, forming large clusters that could be sedimented in a low-speed centrifuge. At 20,100g, the sedimentation time of AgNPs was markedly reduced to 30 min due to Ca²⁺-mediated aggregation, confirmed by the measurements of Ag content in supernatants with graphite furnace atomic absorption spectrometry. No AgNPs were detected in the supernatant by UV–Vis absorption spectra after centrifuging the aggregates. Our approach provides a convenient and inexpensive way to separate dissolved Ag from AgNPs, avoiding long ultracentrifugation times or Ag⁺ adsorption to ultrafiltration membranes.

  20. New, rapid method to measure dissolved silver concentration in silver nanoparticle suspensions by aggregation combined with centrifugation

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feng, E-mail: fengdongub@gmail.com [University of Birmingham, Institute of Microbiology and Infection, School of Biosciences (United Kingdom); Valsami-Jones, Eugenia [University of Birmingham, School of Geography, Earth and Environmental Sciences (United Kingdom); Kreft, Jan-Ulrich [University of Birmingham, Institute of Microbiology and Infection, School of Biosciences (United Kingdom)

    2016-09-15

    It is unclear whether the antimicrobial activities of silver nanoparticles (AgNPs) are exclusively mediated by the release of silver ions (Ag⁺) or, instead, are due to combined nanoparticle and silver ion effects. Therefore, it is essential to quantify dissolved Ag in nanosilver suspensions for investigations of nanoparticle toxicity. We developed a method to measure dissolved Ag in Ag⁺/AgNPs mixtures by combining aggregation of AgNPs with centrifugation. We also describe the reproducible synthesis of stable, uncoated AgNPs. Uncoated AgNPs were quickly aggregated by 2 mM Ca²⁺, forming large clusters that could be sedimented in a low-speed centrifuge. At 20,100g, the sedimentation time of AgNPs was markedly reduced to 30 min due to Ca²⁺-mediated aggregation, confirmed by the measurements of Ag content in supernatants with graphite furnace atomic absorption spectrometry. No AgNPs were detected in the supernatant by UV–Vis absorption spectra after centrifuging the aggregates. Our approach provides a convenient and inexpensive way to separate dissolved Ag from AgNPs, avoiding long ultracentrifugation times or Ag⁺ adsorption to ultrafiltration membranes.

  1. Treatment of liquid separated from sludge by the method using electron beam and ozone in combination

    International Nuclear Information System (INIS)

    Hosono, Masakazu; Arai, Hidehiko; Aizawa, Masaki; Shimooka, Toshio; Shimizu, Ken; Sugiyama, Masashi.

    1995-01-01

    Since the liquid separated from sludge in the dehydration or concentration process of sewer sludge contains a considerable amount of organic compounds that are difficult for microorganisms to decompose, it has become difficult to treat by the conventional activated sludge process. When the separated liquid is discharged into closed water areas, higher-quality treatment is required. A method using electron beam irradiation and ozone oxidation in combination for cleaning the liquid separated from sludge was therefore examined, and the results are reported. The water quality of the sample from the sludge treatment plant in A City is shown. The bio-pretreatment method, the combined electron beam and ozone treatment method, and the water quality analysis method are described. The effect of treatment by the activated sludge process and, as effects of the combined electron beam and ozone treatment, the changes in COD and TOC, chromaticity and gel chromatogram, together with the reaction mechanism, are reported. In this paper, only the basic concept of a model plant applying the combined electron beam and ozone method to the treatment of the liquid separated from sludge is discussed. (K.I.)

  2. An approach for flood monitoring by the combined use of Landsat 8 optical imagery and COSMO-SkyMed radar imagery

    Science.gov (United States)

    Tong, Xiaohua; Luo, Xin; Liu, Shuguang; Xie, Huan; Chao, Wei; Liu, Shuang; Liu, Shijie; Makhinov, A. N.; Makhinova, A. F.; Jiang, Yuying

    2018-02-01

    Remote sensing techniques offer potential for effective flood detection with the advantages of low-cost, large-scale, and real-time surface observations. The easily accessible data sources of optical remote sensing imagery provide abundant spectral information for accurate surface water body extraction, and synthetic aperture radar (SAR) systems represent a powerful tool for flood monitoring because of their all-weather capability. This paper introduces a new approach for flood monitoring by the combined use of both Landsat 8 optical imagery and COSMO-SkyMed radar imagery. Specifically, the proposed method applies support vector machine and the active contour without edges model for water extent determination in the periods before and during the flood, respectively. A map difference method is used for the flood inundation analysis. The proposed approach is particularly suitable for large-scale flood monitoring, and it was tested on a serious flood that occurred in northeastern China in August 2013, which caused immense loss of human lives and properties. High overall accuracies of 97.46% for the optical imagery and 93.70% for the radar imagery are achieved by the use of the techniques presented in this study. The results show that about 12% of the whole study area was inundated, corresponding to 5466 km2 of land surface.
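
The map-difference step for the inundation analysis can be sketched in a few lines. The masks and grid size below are invented for illustration; the study derives the before- and during-flood water masks from Landsat 8 and COSMO-SkyMed imagery via SVM and active-contour classification.

```python
def inundation_difference(before, during):
    """Mask of newly flooded cells: water during the flood, land before."""
    return [[d and not b for b, d in zip(rb, rd)]
            for rb, rd in zip(before, during)]

def percent_inundated(flood_mask):
    cells = [c for row in flood_mask for c in row]
    return 100.0 * sum(cells) / len(cells)

# Toy 4x4 water masks (True = water).
before = [[False] * 4 for _ in range(4)]
before[0][0] = True                       # a permanent river cell
during = [[False] * 4 for _ in range(4)]
during[0][0] = during[1][0] = during[1][1] = True

flood = inundation_difference(before, during)
print(percent_inundated(flood))           # 2 of 16 cells newly flooded: 12.5
```

Permanent water (the river cell) is excluded by the difference, so the percentage reflects only flood-induced inundation, as in the reported 12% figure.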

  3. Study on Differential Algebraic Method of Aberrations up to Arbitrary Order for Combined Electromagnetic Focusing Systems

    Institute of Scientific and Technical Information of China (English)

    CHENG Min; TANG Tiantong; YAO Zhenhua; ZHU Jingping

    2001-01-01

    Differential algebraic method is a powerful technique in computer numerical analysis based on nonstandard analysis and formal series theory. It can compute arbitrarily high order derivatives with excellent accuracy. The principle of the differential algebraic method is applied to calculate high order aberrations of combined electromagnetic focusing systems. As an example, third-order geometric aberration coefficients of an actual combined electromagnetic focusing system were calculated. Arbitrary high order aberrations are conveniently calculated by the differential algebraic method, and the fifth-order aberration diagrams are given.
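
The core mechanism of the differential algebraic method — arithmetic on truncated Taylor (formal power) series yielding exact high-order derivatives — can be sketched as follows. This toy computes derivatives of x³ at a point rather than aberration coefficients; the order and test function are invented for illustration.

```python
import math

ORDER = 5  # keep terms up to x^5

def series_mul(a, b):
    """Cauchy product of two truncated Taylor coefficient lists."""
    c = [0.0] * (ORDER + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= ORDER:
                c[i + j] += ai * bj
    return c

def variable(x0):
    """Taylor coefficients of f(x) = x expanded at x0."""
    return [x0, 1.0] + [0.0] * (ORDER - 1)

x = variable(2.0)
f = series_mul(series_mul(x, x), x)   # f(x) = x^3 at x0 = 2

# k-th derivative = k! * k-th Taylor coefficient
derivs = [math.factorial(k) * f[k] for k in range(ORDER + 1)]
print(derivs)  # [8.0, 12.0, 12.0, 6.0, 0.0, 0.0]
```

Composing such series operations through an optics model propagates exact derivatives of any order, which is what makes high-order aberration coefficients accessible without symbolic differentiation.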

  4. Combined Power Quality Disturbances Recognition Using Wavelet Packet Entropies and S-Transform

    Directory of Open Access Journals (Sweden)

    Zhigang Liu

    2015-08-01

    Full Text Available Aiming at combined power quality disturbance recognition, an automated recognition method based on wavelet packet entropy (WPE) and modified incomplete S-transform (MIST) is proposed in this paper. By combining wavelet packet Tsallis singular entropy, energy entropy and MIST, a 13-dimensional feature vector for different power quality (PQ) disturbances, including single and combined disturbances, is extracted. Then, a ruled decision tree is designed to recognize the combined disturbances. The proposed method is tested and evaluated using a large number of simulated PQ disturbances and some real-life signals, which include voltage sag, swell, interruption, oscillation transient, impulsive transient, harmonics, voltage fluctuation and their combinations. In addition, a comparison of the proposed recognition approach with some existing techniques is made. The experimental results show that the proposed method can effectively recognize single and combined PQ disturbances.
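
As a hedged illustration of one ingredient of such a feature vector, Tsallis entropy over normalized band energies can be computed as below. The band energies and entropic index q are invented, and the paper's wavelet packet Tsallis singular entropy is computed from wavelet packet coefficients rather than the raw energies used here.

```python
def tsallis_entropy(energies, q=2.0):
    """Tsallis entropy of a normalized energy distribution (q != 1)."""
    total = sum(energies)
    p = [e / total for e in energies]
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Uniform band energies maximize the entropy; a single dominant band
# (as for a pure sinusoid) drives it toward zero.
flat = tsallis_entropy([1.0] * 8)            # 1 - 8*(1/8)^2 = 0.875
peaky = tsallis_entropy([100.0] + [0.01] * 7)
print(flat, peaky)
```

A disturbance that spreads signal energy across many bands thus yields a markedly different entropy value than a narrowband one, which is what makes the entropy useful as a classification feature.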

  5. Who runs public health? A mixed-methods study combining qualitative and network analyses.

    Science.gov (United States)

    Oliver, Kathryn; de Vocht, Frank; Money, Annemarie; Everett, Martin

    2013-09-01

    Persistent health inequalities encourage researchers to identify new ways of understanding the policy process. Informal relationships are implicated in finding evidence and making decisions for public health policy (PHP), but few studies use specialized methods to identify key actors in the policy process. We combined network and qualitative data to identify the most influential individuals in PHP in a UK conurbation and describe their strategies to influence policy. Network data were collected by asking for nominations of powerful and influential people in PHP (n = 152, response rate 80%), and 23 semi-structured interviews were analysed using a framework approach. The most influential PHP makers in this conurbation were mid-level managers in the National Health Service and local government, characterized by managerial skills: controlling policy processes by gatekeeping key organizations, providing policy content and managing selected experts and executives to lead on policies. Public health professionals and academics are indirectly connected to policy via managers. The most powerful individuals in public health are managers, not usually considered targets for research. As we show, they are highly influential through all stages of the policy process. This study shows the importance of understanding the daily activities of influential policy individuals.
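
The network step — ranking actors by how often they are nominated as influential (in-degree) — can be sketched as follows. The names and nominations are invented, not the study's survey data.

```python
from collections import Counter

# Each tuple is (respondent, person they nominated as influential).
nominations = [
    ("respondent1", "managerA"), ("respondent2", "managerA"),
    ("respondent3", "managerA"), ("respondent1", "academicB"),
    ("respondent2", "execC"),
]

# In-degree = number of nominations received.
in_degree = Counter(target for _, target in nominations)
ranking = in_degree.most_common()
print(ranking)  # managerA leads with 3 nominations
```

Qualitative interviews then explain *why* the highly nominated actors are influential, which the count alone cannot.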

  6. Analysis of random response of structure with uncertain parameters. Combination of substructure synthesis method and hierarchy method

    International Nuclear Information System (INIS)

    Iwatsubo, Takuzo; Kawamura, Shozo; Mori, Hiroyuki.

    1995-01-01

    In this paper, a method to obtain the random response of a structure with uncertain parameters is proposed. The proposed method is a combination of the substructure synthesis method and the hierarchy method. The concept of the proposed method is that the hierarchy equation of each substructure is obtained using the hierarchy method, and the hierarchy equation of the overall structure is obtained using the substructure synthesis method. Using the proposed method, the reduced-order hierarchy equation can be obtained without analyzing the original whole structure. After the calculation of the mean square value of the response, the reliability analysis can be carried out based on the first passage problem and Poisson's excursion rate. As a numerical example, a simple piping system is considered, with the damping constant of the support treated as the uncertain parameter, and the random response is calculated using the proposed method. As a result, the proposed method is useful for analyzing the random response in terms of accuracy, computer storage and calculation time. (author)
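
The reliability step mentioned at the end of the abstract — Poisson excursion rate (Rice's formula) and first-passage survival for a stationary Gaussian response — can be sketched as below. The standard deviations, barrier level and duration are illustrative numbers, not values from the paper.

```python
import math

def upcrossing_rate(barrier, sigma_x, sigma_v):
    """Rice's formula: mean rate of upcrossings of a level for a
    zero-mean stationary Gaussian process with displacement std
    sigma_x and velocity std sigma_v."""
    return (sigma_v / (2.0 * math.pi * sigma_x)) * \
        math.exp(-barrier ** 2 / (2.0 * sigma_x ** 2))

def survival_probability(barrier, sigma_x, sigma_v, duration):
    """Poisson assumption: probability of no excursion above the
    barrier during [0, T]."""
    return math.exp(-upcrossing_rate(barrier, sigma_x, sigma_v) * duration)

nu = upcrossing_rate(barrier=3.0, sigma_x=1.0, sigma_v=2.0 * math.pi)
print(nu)  # exp(-4.5), about 0.0111 crossings per unit time
print(survival_probability(3.0, 1.0, 2.0 * math.pi, duration=100.0))
```

The mean square response produced by the substructure/hierarchy analysis supplies the sigma values that feed this first-passage estimate.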

  7. Laparoscopic complete mesocolic excision via combined medial and cranial approaches for transverse colon cancer.

    Science.gov (United States)

    Mori, Shinichiro; Kita, Yoshiaki; Baba, Kenji; Yanagi, Masayuki; Tanabe, Kan; Uchikado, Yasuto; Kurahara, Hiroshi; Arigami, Takaaki; Uenosono, Yoshikazu; Mataki, Yuko; Okumura, Hiroshi; Nakajo, Akihiro; Maemura, Kosei; Natsugoe, Shoji

    2017-05-01

    To evaluate the safety and feasibility of laparoscopic complete mesocolic excision via combined medial and cranial approaches with three-dimensional visualization around the gastrocolic trunk and middle colic vessels (MCV) for transverse colon cancer. We evaluated prospectively collected data of 30 consecutive patients who underwent laparoscopic complete mesocolic excision between January 2010 and December 2015, 6 of whom we excluded, leaving 24 for the analysis. We assessed the completeness of excision, operative data, pathological findings, length of large bowel resected, complications, length of hospital stay, and oncological outcomes. Complete mesocolic excision completeness was graded as the mesocolic and intramesocolic planes in 21 and 3 patients, respectively. Eleven, two, eight, and three patients had T1, T2, T3, and T4a tumors, respectively; none had lymph node metastases. A mean of 18.3 lymph nodes was retrieved overall, with a mean of 5.4 retrieved around the origin of the MCV. The mean large bowel length was 21.9 cm, operative time 274 min, intraoperative blood loss 41 mL, and length of hospital stay 15 days. There were no intraoperative complications and two postoperative complications. Our procedure for laparoscopic complete mesocolic excision via combined medial and cranial approaches is safe and feasible for transverse colon cancer.

  8. SPECT reconstruction of combined cone beam and parallel hole collimation with experimental data

    International Nuclear Information System (INIS)

    Li, Jianying; Jaszczak, R.J.; Turkington, T.G.; Greer, K.L.; Coleman, R.E.

    1993-01-01

    The authors have developed three methods to combine parallel and cone beam (P and CB) SPECT data using modified Maximum Likelihood-Expectation Maximization (ML-EM) algorithms. The first combination method applies both parallel and cone beam data sets to reconstruct a single intermediate image after each iteration using the ML-EM algorithm. The other two iterative methods combine the intermediate parallel beam (PB) and cone beam (CB) source estimates to enhance the uniformity of images; these two are ad hoc methods. Earlier studies using Monte Carlo computer simulation suggested that improved images might be obtained by reconstructing combined P and CB SPECT data. Here, these combined collimation methods are qualitatively evaluated using experimental data. Attenuation compensation is performed by including the effects of attenuation in the transition matrix as a multiplicative factor. The combined P and CB images are compared with CB-only images, and the results indicate that the combined P and CB approaches suppress artifacts caused by truncated projections and correct for the distortions of the CB-only images.
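
The first combination strategy — one ML-EM reconstruction over the stacked parallel-beam and cone-beam projections — can be sketched on a toy two-pixel object. The system matrices below are invented stand-ins for the real collimator geometries, not the paper's setup.

```python
def mlem(system, measured, x, iterations=50):
    """Standard ML-EM update applied to a stacked system matrix."""
    for _ in range(iterations):
        # Forward projection with the current estimate.
        forward = [sum(a * xv for a, xv in zip(row, x)) for row in system]
        for j in range(len(x)):
            sens = sum(row[j] for row in system)          # sensitivity
            ratio = sum(row[j] * m / f
                        for row, m, f in zip(system, measured, forward))
            x[j] *= ratio / sens
    return x

parallel = [[1.0, 0.0], [0.0, 1.0]]   # idealized parallel-beam rows
cone = [[0.5, 0.5]]                   # one cone-beam ray summing both pixels
system = parallel + cone              # combined P and CB data

truth = [2.0, 6.0]
measured = [sum(a * t for a, t in zip(row, truth)) for row in system]
estimate = mlem(system, measured, x=[1.0, 1.0])
print(estimate)  # converges toward [2.0, 6.0]
```

Because the stacked matrix contains both data sets, each multiplicative update is driven jointly by the parallel and cone beam measurements, which is the essence of the single-intermediate-image approach.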

  9. A combined triggering-propagation modeling approach for the assessment of rainfall induced debris flow susceptibility

    Science.gov (United States)

    Stancanelli, Laura Maria; Peres, David Johnny; Cancelliere, Antonino; Foti, Enrico

    2017-07-01

    Rainfall-induced shallow slides can evolve into debris flows that move rapidly downstream with devastating consequences. Mapping the susceptibility to debris flow is an important aid for risk mitigation. We propose a novel practical approach to derive debris flow inundation maps useful for susceptibility assessment, based on the integrated use of DEM-based spatially-distributed hydrological and slope stability models with debris flow propagation models. More specifically, the TRIGRS infiltration and infinite slope stability model and the FLO-2D model for the simulation of the related debris flow propagation and deposition are combined. An empirical instability-to-debris flow triggering threshold, calibrated on the basis of observed events, is applied to link the two models and to determine the amount of unstable mass that develops as a debris flow. Calibration of the proposed methodology is carried out using real data from the debris flow event that occurred on 1 October 2009 in the Peloritani mountains area (Italy). Model performance, assessed by receiver-operating-characteristic (ROC) indexes, shows fairly good reproduction of the observed event. A comparison with the performance of the traditional debris flow modeling procedure, in which sediment and water hydrographs are input as lumped quantities at selected points on top of the streams, is also performed, in order to assess quantitatively the limitations of this commonly applied approach. Results show that the proposed method, besides being more process-consistent than the traditional hydrograph-based approach, can potentially provide a more accurate simulation of debris-flow phenomena, in terms of spatial patterns of erosion and deposition as well as of mobilized volumes and depths, avoiding overestimation of the debris flow triggering volume and, thus, of maximum inundation flow depths.
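
The ROC-style evaluation of simulated versus observed debris-flow extents can be sketched over a grid of cells; the two small masks below are invented.

```python
def roc_indexes(observed, simulated):
    """Hit rate (TPR) and false-alarm rate (FPR) over paired cells."""
    hits = misses = false_alarms = correct_neg = 0
    for o, s in zip(observed, simulated):
        if o and s:
            hits += 1
        elif o and not s:
            misses += 1
        elif not o and s:
            false_alarms += 1
        else:
            correct_neg += 1
    tpr = hits / (hits + misses)
    fpr = false_alarms / (false_alarms + correct_neg)
    return tpr, fpr

# 1 = cell inundated, 0 = dry; flattened toy grids.
observed  = [1, 1, 1, 0, 0, 0, 0, 0]
simulated = [1, 1, 0, 1, 0, 0, 0, 0]
print(roc_indexes(observed, simulated))  # (0.666..., 0.2)
```

A well-calibrated model pushes TPR toward 1 while keeping FPR near 0, which is how the paper judges the reproduction of the 2009 event.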

  10. Power-generation method using combined gas and steam turbines

    Energy Technology Data Exchange (ETDEWEB)

    Liu, C; Radtke, K; Keller, H J

    1997-03-20

    The invention concerns a method of power generation using a so-called COGAS (combined gas and steam) turbine installation, the aim being to improve the method with regard to initial costs and energy consumption so that power can be generated as cheaply as possible. This is achieved by splitting air taken from the surrounding atmosphere into an essentially oxygen-containing stream and an essentially nitrogen-containing stream, with the two streams fed onward at approximately atmospheric pressure. The essentially nitrogen-containing stream is mixed with an air stream to form a mixed nitrogen/air stream, and the mixed-gas stream thus produced is brought to combustion chamber pressure in the compressor of the gas turbine, the combustion of the combustion gases in the combustion chamber of the gas turbine being carried out with the greater part of this compressed mixed-gas stream. (author) figs.

  11. Raster-based outranking method: a new approach for municipal solid waste landfill (MSW) siting.

    Science.gov (United States)

    Hamzeh, Mohamad; Abbaspour, Rahim Ali; Davalou, Romina

    2015-08-01

    Municipal solid waste (MSW) landfill siting is a complicated process because it requires the integration of several factors. In this paper, geographic information system (GIS) and multiple criteria decision analysis (MCDA) techniques were combined to handle MSW landfill siting. For this purpose, first, 16 input data layers were prepared in a GIS environment. Then, the exclusionary lands were eliminated and potentially suitable areas for MSW disposal were identified. These potentially suitable areas were further examined, in an innovative approach, by deploying the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) II and the analytic network process (ANP), two of the most recent MCDA methods, in order to determine land suitability for landfilling. PROMETHEE II was used to determine a complete ranking of the alternatives, while ANP was employed to quantify the subjective judgments of evaluators as criteria weights. The resulting land suitability was reported on a grading scale from 1 (least suitable) to 5 (most suitable). Finally, three optimal sites were selected by taking into consideration the local conditions of 15 candidate sites for MSW landfilling. Research findings show that the raster-based method yields effective results.
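
A minimal PROMETHEE II sketch with the "usual" preference function (strict preference whenever one alternative scores higher) is shown below. The three candidate sites, two criteria and weights are invented, whereas the study used 16 GIS layers and ANP-derived weights.

```python
def promethee_ii(scores, weights):
    """Net outranking flows phi = phi_plus - phi_minus."""
    n = len(scores)

    def pref(a, b):  # weighted preference of alternative a over b
        return sum(w * (1.0 if sa > sb else 0.0)
                   for w, sa, sb in zip(weights, scores[a], scores[b]))

    phi_plus = [sum(pref(a, b) for b in range(n) if b != a) / (n - 1)
                for a in range(n)]
    phi_minus = [sum(pref(b, a) for b in range(n) if b != a) / (n - 1)
                 for a in range(n)]
    return [p - m for p, m in zip(phi_plus, phi_minus)]

scores = [[0.9, 0.4],    # site A: per-criterion suitability scores
          [0.5, 0.8],    # site B
          [0.2, 0.3]]    # site C
weights = [0.6, 0.4]     # ANP-style criterion weights (sum to 1)

net_flows = promethee_ii(scores, weights)
ranking = sorted(range(3), key=lambda i: -net_flows[i])
print(net_flows, ranking)  # site A ranks first, site C last
```

Sorting alternatives by net flow gives the complete ranking that PROMETHEE II is used for in the paper.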

  12. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach: color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serves as an atomic feature of each segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and finally we fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach outperforms several state-of-the-art approaches.
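
The scoring step — logistic regression mapping a regional feature vector to a saliency score in [0, 1] — can be sketched as below. The four cues named in the abstract are used as stand-in features, and the weights and bias are invented rather than the paper's learned coefficients.

```python
import math

def saliency_score(features, weights, bias):
    """Logistic regression: sigmoid of a weighted feature sum."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# [global color contrast, center-boundary prior,
#  color-distribution compactness, objectness]
region = [0.8, 0.9, 0.7, 0.6]
weights = [2.0, 1.5, 1.0, 2.5]   # hypothetical learned coefficients
print(saliency_score(region, weights, bias=-3.0))  # about 0.90
```

Thresholding or fusing such scores across the abstraction hierarchy yields the final saliency map.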

  13. Approaches of data combining for reliability assessments with taking into account the priority of data application

    International Nuclear Information System (INIS)

    Zelenyj, O.V.; Pecheritsa, A.V.

    2004-01-01

    Based on the available experience from risk assessments of operational events at Ukrainian NPPs, as well as on the results of the State review of PSA studies for pilot units, it should be noted that historical information on domestic NPP operation is not always available or properly used in the implementation of these activities. Several approaches to combining available generic and specific information for the assessment of reliability parameters (taking into account the priority of data application) are briefly described in the article, along with some recommendations on how to apply these approaches.

  14. Cleaner Production and Workplace Health and Safety: A combined approach. A case study from South Africa

    DEFF Research Database (Denmark)

    Hedlund, Frank Huess

    Environmental goals may be pursued narrow-mindedly with no attention paid to the workplace. This book examines combined approaches in cleaner production projects. It explores two main avenues. First, integration into the project specification. The planning tools in use by assistance agencies......, integration of management systems is an option. A study on the South African Nosa 5-Star system refutes earlier criticism of dismal performance of top-down systems. It is argued that integration at this level is viable. For small companies, less formalistic approaches are required. ILO's network concept WISE...

  15. Measuring combined exposure to environmental pressures in urban areas: an air quality and noise pollution assessment approach.

    Science.gov (United States)

    Vlachokostas, Ch; Achillas, Ch; Michailidou, A V; Moussiopoulos, Nu

    2012-02-01

    This study presents a methodological scheme developed to provide a combined air and noise pollution exposure assessment based on measurements from personal portable monitors. Considered together in a co-exposure approach, air and noise pollution represent a significant environmental hazard to public health. The methodology is demonstrated for the city of Thessaloniki, Greece. The results of an extensive field campaign are presented and the variations in personal exposure between modes of transport, routes, streets and transport microenvironments are evaluated. Air pollution and noise measurements were performed simultaneously along several commuting routes, during the morning and evening rush hours. Combined exposure to environmental pollutants is quantified using the Combined Exposure Factor (CEF) and the Combined Dose and Exposure Factor (CDEF); the CDEF accounts for the potential relative uptake of each pollutant by considering the physical activities of each citizen. Rather than viewing environmental pollutants separately for planning and environmental sustainability considerations, the possibility of an easy-to-comprehend co-exposure approach based on these two indices is demonstrated. Furthermore, they provide for the first time a combined exposure assessment to these environmental pollutants for Thessaloniki and in this sense could be of importance for local public authorities and decision makers. A considerable environmental burden for the citizens of Thessaloniki is observed, especially for VOCs and noise pollution levels. The material herein points out the importance of measuring public health stressors and the necessity of considering urban environmental pollution in a holistic way. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Prediction of Protein–Protein Interactions by Evidence Combining Methods

    Directory of Open Access Journals (Sweden)

    Ji-Wei Chang

    2016-11-01

    Full Text Available Most cellular functions involve proteins’ features based on their physical interactions with other partner proteins. Sketching a map of protein–protein interactions (PPIs) is therefore an important inception step towards understanding the basics of cell functions. Several experimental techniques operating in vivo or in vitro have made significant contributions to screening a large number of protein interaction partners, especially high-throughput experimental methods. However, computational approaches for PPI prediction, supported by the rapid accumulation of data generated from experimental techniques, 3D structure definitions, and genome sequencing, have boosted the map sketching of PPIs. In this review, we shed light on in silico PPI prediction methods that integrate evidence from multiple sources, including evolutionary relationship, function annotation, sequence/structure features, network topology and text mining. These methods are developed to integrate multi-dimensional evidence, to design strategies for predicting novel interactions, and to keep results consistent while increasing prediction coverage and accuracy.
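
One common evidence-combination scheme for PPI confidence — a noisy-OR over per-source scores, assuming independent evidence sources — can be sketched as below. It is only one of several integration strategies such a review surveys, and the source names and scores are invented.

```python
def noisy_or(scores):
    """Combined confidence that an interaction is real, assuming
    independent evidence sources each giving a probability score."""
    p_none = 1.0
    for s in scores:
        p_none *= (1.0 - s)
    return 1.0 - p_none

evidence = {
    "coexpression": 0.30,
    "text_mining": 0.50,
    "domain_signature": 0.40,
}
print(round(noisy_or(evidence.values()), 3))  # 0.79
```

Weak signals from several independent sources can thus combine into a confident prediction, which is the rationale for multi-source integration.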

  17. Qualitative approaches to use of the RE-AIM framework: rationale and methods.

    Science.gov (United States)

    Holtrop, Jodi Summers; Rabin, Borsika A; Glasgow, Russell E

    2018-03-13

    There have been over 430 publications using the RE-AIM model for planning and evaluation of health programs and policies, as well as numerous applications of the model in grant proposals and national programs. Full use of the model includes use of qualitative methods to understand why and how results were obtained on different RE-AIM dimensions; however, recent reviews have revealed that qualitative methods have been used infrequently. Having quantitative and qualitative methods and results iteratively inform each other should enhance understanding and lessons learned. Because there have been few published examples of qualitative approaches and methods using RE-AIM for planning or assessment, and no guidance on how qualitative approaches can inform these processes, we provide guidance on qualitative methods to address the RE-AIM model and its various dimensions. The intended audience is researchers interested in applying RE-AIM or similar implementation models, but the methods discussed should also be relevant to those in community or clinical settings. We present directions for, examples of, and guidance on how qualitative methods can be used to address each of the five RE-AIM dimensions. Formative qualitative methods can be helpful in planning interventions and designing for dissemination. Summative qualitative methods are useful when used in an iterative, mixed-methods approach for understanding how and why different patterns of results occur. In summary, qualitative and mixed-methods approaches to RE-AIM help understand complex situations and results, why and how outcomes were obtained, and contextual factors not easily assessed using quantitative measures.

  18. A proposal for combining mapping, localization and target recognition

    Science.gov (United States)

    Grönwall, Christina; Hendeby, Gustaf; Sinivaara, Kristian

    2015-10-01

    Simultaneous localization and mapping (SLAM) is a well-known positioning approach in GPS-denied environments such as urban canyons and inside buildings. Autonomous/aided target detection and recognition (ATR) is commonly used in military applications to detect threats and targets in outdoor environments. This paper presents approaches to combine SLAM with ATR in ways that compensate for the drawbacks of each method. The methods use physical objects that are recognizable by ATR as unambiguous features in SLAM, while SLAM provides the ATR with better position estimates. Landmarks in the form of 3D point features based on normal aligned radial features (NARF) are used in conjunction with identified objects, and 3D object models replace landmarks when possible. This leads to a more compact map representation with fewer landmarks, which partly compensates for the introduced cost of the ATR. We analyze three approaches to combining SLAM and 3D data: point-point matching ignoring NARF features, point-point matching using the set of points selected by NARF feature analysis, and matching of NARF features using nearest neighbor analysis. The first two approaches are similar to the common iterative closest point (ICP) method. We propose an algorithm that combines EKF-SLAM and ATR based on rectangle estimation. The intended application is to improve the positioning of a first responder moving through an indoor environment, where the map offers localization and simultaneously helps locate people, furniture and potentially dangerous objects such as gas canisters.
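
The nearest-neighbour matching variant can be sketched as one correspondence step of an ICP-like loop. The 2D points and the distance threshold are illustrative, whereas the paper matches NARF descriptors extracted from 3D data.

```python
def nearest_neighbor_matches(map_pts, scan_pts, max_dist=1.0):
    """Associate each scan point with its nearest map landmark,
    rejecting matches beyond a gating distance."""
    matches = []
    for i, p in enumerate(scan_pts):
        dists = [(j, (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
                 for j, q in enumerate(map_pts)]
        j, d2 = min(dists, key=lambda t: t[1])
        if d2 <= max_dist ** 2:
            matches.append((i, j))
    return matches

landmarks = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
scan = [(0.2, -0.1), (4.9, 0.3), (9.0, 9.0)]   # last point has no match
print(nearest_neighbor_matches(landmarks, scan))  # [(0, 0), (1, 1)]
```

In a full SLAM loop, the accepted correspondences feed the pose update, and unmatched scan points become candidate new landmarks or, once recognized by ATR, object models.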

  19. Treatment of premature ejaculation: a new combined approach

    Directory of Open Access Journals (Sweden)

    Adel Kurkar

    2015-01-01

    Causes of premature ejaculation (PE) differ considerably. In this paper, we compared the outcomes of two single treatment lines with those of a combination of both. The combination therapy was more effective than either line alone.

  20. Combined analgesics in (headache) pain therapy: shotgun approach or precise multi-target therapeutics?

    Directory of Open Access Journals (Sweden)

    Fiebich Bernd L

    2011-03-01

    Full Text Available Abstract Background Pain in general and headache in particular are characterized by a change in activity in brain areas involved in pain processing. The therapeutic challenge is to identify drugs with molecular targets that restore the healthy state, resulting in meaningful pain relief or even freedom from pain. Different aspects of pain perception, i.e. sensory and affective components, also explain why there is not just one single target structure for therapeutic approaches to pain. A network of brain areas ("pain matrix") are involved in pain perception and pain control. This diversification of the pain system explains why a wide range of molecularly different substances can be used in the treatment of different pain states and why in recent years more and more studies have described a superior efficacy of a precise multi-target combination therapy compared to therapy with monotherapeutics. Discussion In this article, we discuss the available literature on the effects of several fixed-dose combinations in the treatment of headaches and discuss the evidence in support of the role of combination therapy in the pharmacotherapy of pain, particularly of headaches. The scientific rationale behind multi-target combinations is the therapeutic benefit that could not be achieved by the individual constituents and that the single substances of the combinations act together additively or even multiplicatively and cooperate to achieve a completeness of the desired therapeutic effect. As an example, the fixed-dose combination of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine is reviewed in detail. The major advantage of using such a fixed combination is that the active ingredients act on different but distinct molecular targets and thus are able to act on more signalling cascades involved in pain than most single analgesics without adding more side effects to the therapy.
Summary Multitarget therapeutics like combined analgesics broaden

  1. Combined analgesics in (headache) pain therapy: shotgun approach or precise multi-target therapeutics?

    Science.gov (United States)

    2011-01-01

    Background Pain in general and headache in particular are characterized by a change in activity in brain areas involved in pain processing. The therapeutic challenge is to identify drugs with molecular targets that restore the healthy state, resulting in meaningful pain relief or even freedom from pain. Different aspects of pain perception, i.e. sensory and affective components, also explain why there is not just one single target structure for therapeutic approaches to pain. A network of brain areas ("pain matrix") are involved in pain perception and pain control. This diversification of the pain system explains why a wide range of molecularly different substances can be used in the treatment of different pain states and why in recent years more and more studies have described a superior efficacy of a precise multi-target combination therapy compared to therapy with monotherapeutics. Discussion In this article, we discuss the available literature on the effects of several fixed-dose combinations in the treatment of headaches and discuss the evidence in support of the role of combination therapy in the pharmacotherapy of pain, particularly of headaches. The scientific rationale behind multi-target combinations is the therapeutic benefit that could not be achieved by the individual constituents and that the single substances of the combinations act together additively or even multiplicatively and cooperate to achieve a completeness of the desired therapeutic effect. As an example, the fixed-dose combination of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine is reviewed in detail. The major advantage of using such a fixed combination is that the active ingredients act on different but distinct molecular targets and thus are able to act on more signalling cascades involved in pain than most single analgesics without adding more side effects to the therapy.
Summary Multitarget therapeutics like combined analgesics broaden the array of therapeutic

  2. Combined analgesics in (headache) pain therapy: shotgun approach or precise multi-target therapeutics?

    Science.gov (United States)

    Straube, Andreas; Aicher, Bernhard; Fiebich, Bernd L; Haag, Gunther

    2011-03-31

    Pain in general and headache in particular are characterized by a change in activity in brain areas involved in pain processing. The therapeutic challenge is to identify drugs with molecular targets that restore the healthy state, resulting in meaningful pain relief or even freedom from pain. Different aspects of pain perception, i.e. sensory and affective components, also explain why there is not just one single target structure for therapeutic approaches to pain. A network of brain areas ("pain matrix") is involved in pain perception and pain control. This diversification of the pain system explains why a wide range of molecularly different substances can be used in the treatment of different pain states and why in recent years more and more studies have described a superior efficacy of a precise multi-target combination therapy compared to therapy with monotherapeutics. In this article, we discuss the available literature on the effects of several fixed-dose combinations in the treatment of headaches and discuss the evidence in support of the role of combination therapy in the pharmacotherapy of pain, particularly of headaches. The scientific rationale behind multi-target combinations is the therapeutic benefit that could not be achieved by the individual constituents, and that the single substances of the combinations act together additively or even multiplicatively and cooperate to achieve a completeness of the desired therapeutic effect. As an example, the fixed-dose combination of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine is reviewed in detail. The major advantage of using such a fixed combination is that the active ingredients act on different but distinct molecular targets and thus are able to act on more signalling cascades involved in pain than most single analgesics without adding more side effects to the therapy.
Multitarget therapeutics like combined analgesics broaden the array of therapeutic options, enable the completeness

  3. Combination Drug Delivery Approaches in Metastatic Breast Cancer

    Directory of Open Access Journals (Sweden)

    Jun H. Lee

    2012-01-01

    Disseminated metastatic breast cancer needs aggressive treatment due to its reduced response to anticancer treatment and hence low survival and quality of life. Although in theory a combination drug therapy has advantages over single-agent therapy, no appreciable survival enhancement is generally reported whereas increased toxicity is frequently seen in combination treatment especially in chemotherapy. Currently used combination treatments in metastatic breast cancer will be discussed with their challenges leading to the introduction of novel combination anticancer drug delivery systems that aim to overcome these challenges. Widely studied drug delivery systems such as liposomes, dendrimers, polymeric nanoparticles, and water-soluble polymers can concurrently carry multiple anticancer drugs in one platform. These carriers can provide improved target specificity achieved by passive and/or active targeting mechanisms.

  4. A combined HM-PCR/SNuPE method for high sensitive detection of rare DNA methylation

    Directory of Open Access Journals (Sweden)

    Tierling Sascha

    2010-06-01

    Background DNA methylation changes are widely used as early molecular markers in cancer detection. Sensitive detection and classification of rare methylation changes in DNA extracted from circulating body fluids or complex tissue samples is crucial for the understanding of tumor etiology, clinical diagnosis and treatment. In this paper, we describe a combined method to monitor the presence of methylated tumor DNA in an excess of unmethylated background DNA of non-tumorous cells. The method combines heavy methyl-PCR, which favors preferential amplification of methylated marker sequence from bisulfite-treated DNA, with a methylation-specific single nucleotide primer extension monitored by ion-pair, reversed-phase, high-performance liquid chromatography separation. Results This combined method allows detection of 14 pg (that is, four to five genomic copies) of methylated chromosomal DNA in a 2000-fold excess (that is, 50 ng) of unmethylated chromosomal background, with an analytical sensitivity of > 90%. We outline a detailed protocol for the combined assay on two examples of known cancer markers (SEPT9 and TMEFF2) and discuss general aspects of assay design and data interpretation. Finally, we provide an application example for rapid testing on tumor methylation in plasma DNA derived from a small cohort of patients with colorectal cancer. Conclusion The method allows unambiguous detection of rare DNA methylation, for example in body fluid or DNA isolates from cells or tissues, with very high sensitivity and accuracy. The application combines standard technologies and can easily be adapted to any target region of interest. It does not require costly reagents and can be used for routine screening of many samples.

  5. A Combined Two-Method MPPT Control Scheme for Grid-Connected Photovoltaic Systems

    DEFF Research Database (Denmark)

    Dorofte, Christinel; Borup, Uffe; Blaabjerg, Frede

    2005-01-01

    In order to increase the output efficiency of a grid-connected photovoltaic (PV) system it is important to have an efficient Maximum Power Point Tracker (MPPT). In the case of low irradiation, the Perturb and Observe (PO) and Incremental Conductance (IC) methods have a poor efficiency, because of the poor resolution in the acquired signals, when a fixed point implementation is done. A cost-effective two-method MPPT control scheme is proposed in this paper to track the maximum power point (MPP) at both low and high irradiation, by combining a Constant Voltage (CV) method and modified PO algorithm...
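A minimal sketch of the two-mode idea in this record (not the authors' implementation; the array parameters, the power threshold, and the 0.76·Voc rule of thumb are illustrative assumptions): use a Constant Voltage reference when measured power indicates low irradiation, and a Perturb-and-Observe step otherwise.

```python
# Illustrative combined CV + PO MPPT sketch (all parameters are assumptions).
V_OC = 40.0          # assumed open-circuit voltage of the array [V]
CV_FRACTION = 0.76   # rule of thumb: MPP voltage is roughly 76% of Voc
P_LOW = 50.0         # assumed power threshold marking "low irradiation" [W]
STEP = 0.2           # PO perturbation step [V]

def mppt_reference(v, i, v_ref_prev, p_prev, v_prev):
    """Return (next voltage reference, current power, current voltage)."""
    p = v * i
    if p < P_LOW:
        # CV mode: at low irradiation PO resolution is poor, so clamp the
        # operating point near the estimated MPP voltage instead.
        v_ref = CV_FRACTION * V_OC
    else:
        # PO mode: keep perturbing in the direction that increased power.
        if (p - p_prev) * (v - v_prev) >= 0:
            v_ref = v_ref_prev + STEP
        else:
            v_ref = v_ref_prev - STEP
    return v_ref, p, v
```

At each control period the caller would feed back the returned power and voltage as `p_prev` and `v_prev` for the next step.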

  6. Actively Teaching Research Methods with a Process Oriented Guided Inquiry Learning Approach

    Science.gov (United States)

    Mullins, Mary H.

    2017-01-01

    Active learning approaches have shown to improve student learning outcomes and improve the experience of students in the classroom. This article compares a Process Oriented Guided Inquiry Learning style approach to a more traditional teaching method in an undergraduate research methods course. Moving from a more traditional learning environment to…

  7. A combination of transformation optics and surface impedance modulation to design compact retrodirective reflectors

    Directory of Open Access Journals (Sweden)

    H. Haddad

    2018-02-01

    This study proposes a new approach to flatten retrodirective corner reflectors. The proposed method enables compact reflectors via Transformation Optics (TO) combined with Surface Impedance Modulation (SIM). This combination makes it possible to relax the constraints on the anisotropic material resulting from the TO. The phase gradient approach is generalized for use within anisotropic media and is implemented with SIM. Different reflector setups are designed, simulated and compared for fop = 8 GHz using ANSYS® HFSS® in order to validate the use of such a combination.

  8. Comparison of algae cultivation methods for bioenergy production using a combined life cycle assessment and life cycle costing approach.

    Science.gov (United States)

    Resurreccion, Eleazer P; Colosi, Lisa M; White, Mark A; Clarens, Andres F

    2012-12-01

    Algae are an attractive energy source, but important questions still exist about the sustainability of this technology on a large scale. Two particularly important questions concern the method of cultivation and the type of algae to be used. This present study combines elements of life cycle analysis (LCA) and life cycle costing (LCC) to evaluate open pond (OP) systems and horizontal tubular photobioreactors (PBRs) for the cultivation of freshwater (FW) or brackish-to-saline water (BSW) algae. Based on the LCA, OPs have lower energy consumption and greenhouse gas emissions than PBRs; e.g., 32% less energy use for construction and operation. According to the LCC, all four systems are currently financially unattractive investments, though OPs are less so than PBRs. BSW species deliver better energy and GHG performance and higher profitability than FW species in both OPs and PBRs. Sensitivity analyses suggest that improvements in critical cultivation parameters (e.g., CO2 utilization efficiency or algae lipid content), conversion parameters (e.g., anaerobic digestion efficiency), and market factors (e.g., costs of CO2 and electricity, or sale prices for algae biodiesel) could alter these results. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Novel approach in quantitative analysis of shearography method

    International Nuclear Information System (INIS)

    Wan Saffiey Wan Abdullah

    2002-01-01

    The application of laser interferometry in industrial non-destructive testing and material characterization is becoming more prevalent, since this method provides non-contact full-field inspection of the test object. However, its application has so far been limited to qualitative analysis; the current trend is toward quantitative analysis, which attempts to characterize the examined defect in detail, for a range of object sizes. The growing commercial demand for quantitative analysis in NDT and material characterization is driving the quality of optical and analysis instruments. However, very little attention is currently being paid to understanding, quantifying and compensating for the numerous error sources inherent in interferometers. This paper presents a comparison of measurement analysis using the established theoretical approach and a new approach that takes into account divergent illumination and other geometrical factors. Differences between the measurement systems can be attributed to these error factors. (Author)

  10. A vulnerability driven approach to identify adverse climate and land use change combinations for critical hydrologic indicator thresholds: Application to a watershed in Pennsylvania, USA

    Science.gov (United States)

    Singh, R.; Wagener, T.; Crane, R.; Mann, M. E.; Ning, L.

    2014-04-01

    Large uncertainties in streamflow projections derived from downscaled climate projections of precipitation and temperature can render such simulations of limited value for decision making in the context of water resources management. New approaches are being sought to provide decision makers with robust information in the face of such large uncertainties. We present an alternative approach that starts with the stakeholder's definition of vulnerable ranges for relevant hydrologic indicators. Then the modeled system is analyzed to assess under what conditions these thresholds are exceeded. The space of possible climate and land use combinations for a watershed is explored to isolate subspaces that lead to vulnerability, while considering model parameter uncertainty in the analysis. We implement this concept using classification and regression trees (CART) that separate the input space of climate and land use change into those combinations that lead to vulnerability and those that do not. We test our method in a Pennsylvania watershed for nine ecological and water resources related streamflow indicators, for which an increase in temperature between 3°C and 6°C and a change in precipitation between -17% and 19% is projected. Our approach provides several new insights; for example, we show that even small decreases in precipitation (~5%) combined with temperature increases greater than 2.5°C can push the mean annual runoff into a slightly vulnerable regime. Using this impact and stakeholder driven strategy, we explore the decision-relevant space more fully and provide information to the decision maker even if climate change projections are ambiguous.
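The CART step described above can be sketched generically (made-up data, not the study's model; the vulnerability rule and variable ranges below are assumptions loosely mimicking the reported thresholds):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical climate/land-use input space: temperature change [degC],
# precipitation change [%], fraction of watershed urbanized.
n = 2000
dT = rng.uniform(0.0, 6.0, n)
dP = rng.uniform(-20.0, 20.0, n)
urban = rng.uniform(0.0, 0.5, n)
X = np.column_stack([dT, dP, urban])

# Stand-in vulnerability rule (an assumption): a combination is labelled
# vulnerable when precipitation drops more than 5% AND warming exceeds 2.5 degC.
y = ((dP < -5.0) & (dT > 2.5)).astype(int)

# CART recovers the partition of the input space into vulnerable /
# non-vulnerable subspaces as human-readable threshold rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.score(X, y))  # essentially 1.0 on this noiseless toy rule
```

In the real application the labels would come from hydrologic model runs crossing stakeholder-defined indicator thresholds rather than a hand-written rule.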

  11. An empirical comparison of different approaches for combining multimodal neuroimaging data with Support Vector Machine

    Directory of Open Access Journals (Sweden)

    William ePettersson-Yeo

    2014-07-01

    In the pursuit of clinical utility, neuroimaging researchers of psychiatric and neurological illness are increasingly using analyses, such as support vector machine (SVM), that allow inference at the single-subject level. Recent studies employing single-modality data, however, suggest that classification accuracies must be improved for such utility to be realised. One possible solution is to integrate different data types to provide a single combined output classification; either by generating a single decision function based on an integrated kernel matrix, or by creating an ensemble of multiple single-modality classifiers and integrating their predictions. Here, we describe four integrative approaches: (1) an un-weighted sum of kernels, (2) multi-kernel learning, (3) prediction averaging, and (4) majority voting, and compare their ability to enhance classification accuracy relative to the best single-modality classification accuracy. We achieve this by integrating structural, functional and diffusion tensor magnetic resonance imaging data, in order to compare ultra-high risk (UHR; n=19), first episode psychosis (FEP; n=19) and healthy control subjects (HCs; n=19). Our results show that (i) whilst integration can enhance classification accuracy by up to 13%, the frequency of such instances may be limited, (ii) where classification can be enhanced, simple methods may yield greater increases relative to more computationally complex alternatives, and (iii) the potential for classification enhancement is highly influenced by the specific diagnostic comparison under consideration. In conclusion, our findings suggest that for moderately sized clinical neuroimaging datasets, combining different imaging modalities in a data-driven manner is no magic bullet for increasing classification accuracy.
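Two of the four integrative approaches, the un-weighted sum of kernels (1) and voting over single-modality classifiers (4), can be sketched generically (synthetic data, not the study's pipeline; modality names and sizes are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Two synthetic "modalities" (e.g. structural and functional features) for
# 60 subjects in two classes; a small class-dependent shift makes them
# weakly informative.
n = 60
y = np.repeat([0, 1], n // 2)
mod1 = rng.normal(0, 1, (n, 10)) + y[:, None] * 0.8
mod2 = rng.normal(0, 1, (n, 15)) + y[:, None] * 0.8

# Approach 1: un-weighted sum of (linear) kernels -> one decision function.
K = mod1 @ mod1.T + mod2 @ mod2.T
clf_sum = SVC(kernel="precomputed").fit(K, y)
acc_sum = clf_sum.score(K, y)  # training accuracy, for illustration only

# Approach 4: voting over single-modality classifiers (with only two voters,
# a 1-1 tie is broken here in favour of class 0 -- a demo choice).
votes = np.stack([SVC(kernel="linear").fit(m, y).predict(m)
                  for m in (mod1, mod2)])
acc_vote = np.mean((votes.sum(axis=0) > 1).astype(int) == y)
print(acc_sum, acc_vote)
```

A real evaluation would of course use cross-validation rather than training accuracy, as the study does.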

  12. An integrated approach for solving a MCDM problem, Combination of Entropy Fuzzy and F-PROMETHEE techniques

    Directory of Open Access Journals (Sweden)

    Amin Shahmardan

    2013-09-01

    Purpose: The intention of this paper is the presentation of a new integrated approach for solving a multi-attribute decision-making problem by the use of Entropy Fuzzy and F-PROMETHEE (fuzzy preference ranking method for enrichment evaluation) techniques. Design/methodology/approach: In these sorts of multi-attribute decision-making problems, a number of criteria and alternatives are put forward as input data. Ranking of these alternatives according to the mentioned criteria is regarded as the outcome of solving these kinds of problems. Initially, weights of criteria are determined by implementation of the Entropy Fuzzy method. According to the determined weights, the F-PROMETHEE method is exerted to rank these alternatives in terms of desirability to the DM (decision maker). Findings: Being in an uncertain environment and the vagueness of the DM's judgments lead us to implement an algorithm which can deal with these constraints properly. This technique, namely Entropy Fuzzy as a weighting method together with F-PROMETHEE, is performed to fulfill this approach more precisely according to tangible and intangible aspects. The main finding of the applied approach is the final ranking of alternatives, helping the DM to reach a more reliable decision. Originality/Value: The main contribution of this approach is that it gives real significance to the DM's attitudes about the mentioned criteria for the determined alternatives, which is not elucidated in former approaches like the Analytical Hierarchy Process (AHP). Furthermore, previous methods like Shannon Entropy do not pay sufficient attention to the satisfaction degree of each criterion in the proposed alternatives with regard to the DM's statements. Comprehensive explanations of these procedures are given in the various sections of this article.
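The crisp Shannon-entropy weighting step at the core of such entropy-based methods can be sketched as follows (the decision matrix is made up; the fuzzy extension used in the paper is not reproduced here):

```python
import numpy as np

# Decision matrix: rows = alternatives, columns = benefit-type criteria
# (larger is better). Values are illustrative only.
X = np.array([[7.0, 120.0, 3.0],
              [5.0,  90.0, 4.0],
              [9.0, 110.0, 2.0],
              [6.0, 130.0, 5.0]])

m, n = X.shape
P = X / X.sum(axis=0)                 # normalise each criterion column
k = 1.0 / np.log(m)                   # scales entropy into [0, 1]
E = -k * (P * np.log(P)).sum(axis=0)  # Shannon entropy of each criterion
d = 1.0 - E                           # degree of diversification
w = d / d.sum()                       # entropy weights, summing to 1

print(np.round(w, 3))
```

Criteria whose values vary more across alternatives (lower entropy) receive larger weights, which the ranking stage (here, F-PROMETHEE) then consumes.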

  13. Fast method to compute scattering by a buried object under a randomly rough surface: PILE combined with FB-SA.

    Science.gov (United States)

    Bourlier, Christophe; Kubické, Gildas; Déchamps, Nicolas

    2008-04-01

    A fast, exact numerical method based on the method of moments (MM) is developed to calculate the scattering from an object below a randomly rough surface. Déchamps et al. [J. Opt. Soc. Am. A 23, 359 (2006)] have recently developed the PILE (propagation-inside-layer expansion) method for a stack of two one-dimensional rough interfaces separating homogeneous media. From the inversion of the impedance matrix by block (in which two impedance matrices of each interface and two coupling matrices are involved), this method allows one to calculate separately and exactly the multiple-scattering contributions inside the layer in which the inverses of the impedance matrices of each interface are involved. Our purpose here is to apply this method for an object below a rough surface. In addition, to invert a matrix of large size, the forward-backward spectral acceleration (FB-SA) approach of complexity O(N) (N is the number of unknowns on the interface) proposed by Chou and Johnson [Radio Sci. 33, 1277 (1998)] is applied. The new method, PILE combined with FB-SA, is tested on perfectly conducting circular and elliptic cylinders located below a dielectric rough interface obeying a Gaussian process with Gaussian and exponential height autocorrelation functions.
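The heart of PILE is inverting the coupled two-interface impedance matrix by blocks. A generic sketch of that 2×2 block inversion via the Schur complement (random, diagonally dominated matrices stand in for the impedance and coupling blocks; this is the linear-algebra identity, not the electromagnetic solver):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 4, 3
A = rng.normal(size=(n1, n1)) + 5 * np.eye(n1)  # interface 1 self-impedance
D = rng.normal(size=(n2, n2)) + 5 * np.eye(n2)  # interface 2 self-impedance
B = 0.1 * rng.normal(size=(n1, n2))             # coupling block 1 -> 2
C = 0.1 * rng.normal(size=(n2, n1))             # coupling block 2 -> 1

Z = np.block([[A, B], [C, D]])                  # full coupled system

# Block inversion: only interface-sized inverses are ever formed, which is
# what lets PILE separate the multiple-scattering contributions.
Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                            # Schur complement of A
Sinv = np.linalg.inv(S)
Zinv = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv,                   Sinv],
])

print(np.allclose(Zinv @ Z, np.eye(n1 + n2)))   # True
```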

  14. On the bioavailability of trace metals in surface sediments: a combined geochemical and biological approach.

    Science.gov (United States)

    Roosa, Stéphanie; Prygiel, Emilie; Lesven, Ludovic; Wattiez, Ruddy; Gillan, David; Ferrari, Benoît J D; Criquet, Justine; Billon, Gabriel

    2016-06-01

    The bioavailability of metals was estimated in three river sediments (Sensée, Scarpe, and Deûle Rivers) impacted by different levels of Cu, Cd, Pb, and Zn (Northern France). For that, a combination of geochemistry and biological responses (bacteria and chironomids) was used. The results obtained illustrate the complexity of the notion of "bioavailability." Indeed, geochemical indexes suggested a low toxicity, even in surface sediments with high concentrations of total metals and a predicted severe effect levels for the organisms. This was also suggested by the abundance of total bacteria as determined by DAPI counts, with high bacterial cell numbers even in contaminated areas. However, a fraction of metals may be bioavailable as it was shown for chironomid larvae which were able to accumulate an important quantity of metals in surface sediments within just a few days. We concluded that (1) the best approach to estimate bioavailability in the selected sediments is a combination of geochemical and biological approaches and that (2) the sediments in the Deûle and Scarpe Rivers are highly contaminated and may impact bacterial populations but also benthic invertebrates.

  15. Large central lesions compressing the hypothalamus and brainstem. Operative approaches and combination treatment with radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, Hiroshi K.; Negishi, Masatoshi; Kohga, Hideaki; Hirato, Masafumi; Ohye, Chihiro [Gunma Univ., Maebashi (Japan). School of Medicine; Shibazaki, Tohru

    1998-09-01

    A major aim of minimally invasive neurosurgery is to preserve function in the brain and cranial nerves. Based on previous results of radiosurgery for central lesions (19 craniopharyngiomas, 46 pituitary adenomas, 9 meningeal tumors), combined micro- and/or radiosurgery was applied for large lesions compressing the hypothalamus and/or brainstem. A basal interhemispheric approach via superomedial orbitotomy or a transcallosal-transforaminal approach was used for these large tumors. Tumors left behind in the hypothalamus or cavernous sinus were treated with radiosurgery using a gamma unit. Preoperative hypothalamo-pituitary functions were preserved in most of these patients. Radiosurgical results were evaluated in patients followed for more than 2 years after treatment. All 9 craniopharyngiomas decreased in size after radiosurgery, although a second treatment was required in 4 patients. All 20 pituitary adenomas were stable or decreased in size and 5 of 7 functioning adenomas showed normalized values of hormones in the serum. All 3 meningeal tumors were stable or decreased in size after treatment. No cavernous sinus symptoms developed after radiosurgery. We conclude that combined micro- and radio-neurosurgery is an effective and less invasive treatment for large central lesions compressing the hypothalamus and brainstem. (author)

  16. Large central lesions compressing the hypothalamus and brainstem. Operative approaches and combination treatment with radiosurgery

    International Nuclear Information System (INIS)

    Inoue, Hiroshi K.; Negishi, Masatoshi; Kohga, Hideaki; Hirato, Masafumi; Ohye, Chihiro; Shibazaki, Tohru

    1998-01-01

    A major aim of minimally invasive neurosurgery is to preserve function in the brain and cranial nerves. Based on previous results of radiosurgery for central lesions (19 craniopharyngiomas, 46 pituitary adenomas, 9 meningeal tumors), combined micro- and/or radiosurgery was applied for large lesions compressing the hypothalamus and/or brainstem. A basal interhemispheric approach via superomedial orbitotomy or a transcallosal-transforaminal approach was used for these large tumors. Tumors left behind in the hypothalamus or cavernous sinus were treated with radiosurgery using a gamma unit. Preoperative hypothalamo-pituitary functions were preserved in most of these patients. Radiosurgical results were evaluated in patients followed for more than 2 years after treatment. All 9 craniopharyngiomas decreased in size after radiosurgery, although a second treatment was required in 4 patients. All 20 pituitary adenomas were stable or decreased in size and 5 of 7 functioning adenomas showed normalized values of hormones in the serum. All 3 meningeal tumors were stable or decreased in size after treatment. No cavernous sinus symptoms developed after radiosurgery. We conclude that combined micro- and radio-neurosurgery is an effective and less invasive treatment for large central lesions compressing the hypothalamus and brainstem. (author)

  17. Comparison of Early Outcomes with Three Approaches for Combined Coronary Revascularization and Carotid Endarterectomy

    Directory of Open Access Journals (Sweden)

    Arzu Antal Dönmez

    Objective: This study aims to compare three different surgical approaches for combined coronary and carotid artery stenosis as a single-stage procedure and to assess the effect of operative strategy on mortality and neurological complications. Methods: This retrospective study involves 136 patients who had synchronous coronary artery revascularization and carotid endarterectomy in our institution, between January 2002 and December 2012. Patients were divided into 3 groups according to the surgical technique used. Group I included 70 patients who had carotid endarterectomy, followed by coronary revascularization with on-pump technique, group II included 29 patients who had carotid endarterectomy, followed by coronary revascularization with off-pump technique, group III included 37 patients who had coronary revascularization with on-pump technique followed by carotid endarterectomy under aortic cross-clamp and systemic hypothermia (22-27°C). Postoperative outcomes were evaluated. Results: Overall early mortality and stroke rates were both 5.1%. There were 3 (4.3%) deaths in group I, 2 (6.9%) deaths in group II and 2 (5.4%) deaths in group III. Stroke was observed in 5 (7.1%) patients in group I and 2 (6.9%) in group II. Stroke was not observed in group III. No statistically significant difference was observed for mortality and stroke rates among the groups. Conclusion: We identified no significant difference in mortality or neurologic complications among the three approaches for synchronous surgery for coronary and carotid disease. Therefore it is impossible to conclude that a single principle might be adopted as standard practice. Patient-specific risk factors and clinical conditions might be important in determining the surgical technique.

  18. A New Image Encryption Technique Combining Hill Cipher Method, Morse Code and Least Significant Bit Algorithm

    Science.gov (United States)

    Nofriansyah, Dicky; Defit, Sarjon; Nurcahyo, Gunadi W.; Ganefri, G.; Ridwan, R.; Saleh Ahmar, Ansari; Rahim, Robbi

    2018-01-01

    Cybercrime is one of the most serious threats. One effort to reduce cybercrime is to find new techniques for securing data, such as a combination of Cryptography, Steganography and Watermarking. Cryptography and Steganography are growing data security sciences, and combining them is one way to improve data integrity. New techniques are created by combining several algorithms, one of which is the incorporation of the Hill cipher method and Morse code. Morse code is one of the communication codes used in the Scouting field; it consists of dots and lines. This is a new concept combining modern and classic techniques to maintain data integrity. The result of the combination of these three methods is expected to yield new algorithms that improve the security of data, especially images.
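The Hill-cipher ingredient of the combined technique can be sketched as follows (a generic textbook example, not the paper's implementation; the 2×2 key matrix is an arbitrary invertible-mod-26 choice):

```python
import numpy as np

# Hill cipher: each pair of letters is a vector multiplied by the key
# matrix modulo 26. The key must be invertible mod 26 for decryption.
KEY = np.array([[3, 3],
                [2, 5]])   # det = 9, gcd(9, 26) = 1 -> invertible mod 26

def hill_encrypt(plaintext, key=KEY):
    nums = [ord(c) - ord('A') for c in plaintext.upper()]
    if len(nums) % 2:
        nums.append(ord('X') - ord('A'))        # pad odd-length input
    out = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        out.extend((key @ block) % 26)          # linear transform mod 26
    return ''.join(chr(int(v) + ord('A')) for v in out)

print(hill_encrypt("HELP"))  # -> HIAT
```

In the scheme described above, the Hill-cipher output would then be mapped through Morse code and embedded into an image via the least-significant-bit step.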

  19. A Combined Self-Consistent Method to Estimate the Effective Properties of Polypropylene/Calcium Carbonate Composites

    Directory of Open Access Journals (Sweden)

    Zhongqiang Xiong

    2018-01-01

    In this work, to avoid the difficulty of application due to irregular filler shapes in experiments, self-consistent and differential self-consistent methods were combined to obtain a decoupled equation. The combined method suggests a tensor γ independent of filler content as an important connection between high and low filler contents. On one hand, the constant parameter can be calculated by Eshelby's inclusion theory or the Mori–Tanaka method to predict effective properties of composites coinciding with its hypothesis. On the other hand, the parameter can be calculated from several experimental results to estimate the effective properties of prepared composites of other filler contents. In addition, an evaluation index σf′ of the interaction strength between matrix and fillers is proposed based on experiments. In the experiments, a hyper-dispersant was synthesized to prepare well-dispersed polypropylene/calcium carbonate (PP/CaCO3) composites up to 70 wt % filler content, with a dispersant dosage of only 5 wt % of the CaCO3 content. Based on several verifications, it is hoped that the combined self-consistent method is valid for other two-phase composites in experiments, following the same application procedure as in this work.

  20. Improvements of marine clay slurries using chemical–physical combined method (CPCM

    Directory of Open Access Journals (Sweden)

    Dongqing Wu

    2015-04-01

    In this paper, the effectiveness, applicability and validity of chemical–physical combined methods (CPCMs) for treatment of marine clay (MC) slurries were evaluated. The method CPCM1 combines chemical stabilization and vacuum preloading (VP), while CPCM2 is similar to CPCM1 but includes both the application of surcharge and the use of geo-bags to provide confinement during surcharge preloading. The key advantage of CPCM2 using geo-bags is that the surcharge can be applied immediately on the chemically stabilized slurries. Two types of geo-bags were investigated under simulated land-filling and dyke conditions, respectively. The test results show that the shear strength (cu) of slurry treated by CPCM2 is generally much higher than that by CPCM1. Besides, the use of CPCM2 can significantly reduce the treatment time due to the short drainage paths created by the geo-bags. Overall, CPCM2 allows faster consolidation and higher preloading, which help to achieve higher mechanical properties of the stabilized slurry. There are consistent relationships between cu and the water content of slurries treated by CPCM2. Several important observations were also made based on comparisons of the experimental data.

  1. Research design: qualitative, quantitative and mixed methods approaches. Creswell, John W. Sage, 320pp, £29, ISBN 0761924426.

    Science.gov (United States)

    2004-09-01

    The second edition of Creswell's book has been significantly revised and updated. The author clearly sets out three approaches to research: quantitative, qualitative and mixed methods. As someone who has used mixed methods in my research, it is refreshing to read a textbook that addresses this. The differences between the approaches are clearly identified and a rationale for using each methodological stance provided.

  2. The MIXED framework: A novel approach to evaluating mixed-methods rigor.

    Science.gov (United States)

    Eckhardt, Ann L; DeVon, Holli A

    2017-10-01

    Evaluation of rigor in mixed-methods (MM) research is a persistent challenge due to the combination of inconsistent philosophical paradigms, the use of multiple research methods which require different skill sets, and the need to combine research at different points in the research process. Researchers have proposed a variety of ways to thoroughly evaluate MM research, but each method fails to provide a framework that is useful for the consumer of research. In contrast, the MIXED framework is meant to bridge the gap between an academic exercise and practical assessment of a published work. The MIXED framework (methods, inference, expertise, evaluation, and design) borrows from previously published frameworks to create a useful tool for the evaluation of a published study. The MIXED framework uses an experimental eight-item scale that allows for comprehensive integrated assessment of MM rigor in published manuscripts. Mixed methods are becoming increasingly prevalent in nursing and healthcare research requiring researchers and consumers to address issues unique to MM such as evaluation of rigor. © 2017 John Wiley & Sons Ltd.

  3. Relating business intelligence and enterprise architecture - A method for combining operational data with architectural metadata

    NARCIS (Netherlands)

    Veneberg, R.K.M.; Iacob, Maria Eugenia; van Sinderen, Marten J.; Bodenstaff, L.

    Combining enterprise architecture and operational data is complex (especially when considering the actual ‘matching’ of data with enterprise architecture elements), and little has been written on how to do this. In this paper we aim to fill this gap, and propose a method to combine operational data

  4. A hybrid approach for efficient anomaly detection using metaheuristic methods

    Directory of Open Access Journals (Sweden)

    Tamer F. Ghanem

    2015-07-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation, yet the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets, using detectors generated with the multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative-selection-based detector generation. The evaluation of this approach is performed using the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors, with an accuracy of 96.1% compared to other competing machine learning algorithms.

  5. A PRACTICAL APPROACH TO THE GROUND OSCILLATION VELOCITY MEASUREMENT METHOD

    Directory of Open Access Journals (Sweden)

    Siniša Stanković

    2017-01-01

    Full Text Available The use of an explosive’s energy during blasting includes undesired effects on the environment. The seismic influence of a blast, as a major undesired effect, is addressed by many national standards, recommendations and calculations in which the main parameter is the ground oscillation velocity at the field measurement location. There are a few approaches and methods for calculating the expected ground oscillation velocity from the charge weight per delay and the distance from the blast to the point of interest. These methods and formulas do not always provide satisfactory results: the values measured at various distances from the blast field differ more or less from the calculated ones. Since blasting works are executed in diverse geological conditions, the aim of this research is the development of a practical and reliable approach that yields a different model for each construction site where blasting works have been or will be executed. The approach is based on a greater number of measuring points placed in a line from the blast field at predetermined distances. This new approach has been compared with other commonly used methods and formulas, using measurements taken during the research along with measurements from several previously executed projects. The results confirmed that the suggested model gives more accurate values.

  6. Why older adults spend time sedentary and break their sedentary behaviour: a mixed methods approach using life-logging equipment

    Directory of Open Access Journals (Sweden)

    Manon L Dontje

    2015-10-01

    It can be concluded that a mixed methods approach, combining objective data from an activity monitor with contextual information from time-lapse photos and subjective information from people regarding their own behaviour, is a useful way to provide in-depth information about (breaking) sedentary behaviour in older adults. The results of this study showed that there is a difference between what older adults believe their reasons are for remaining sedentary or breaking their sedentary time and what their actual reasons are. A personal story board based on objective measurements of sedentary behaviour can be a useful method to raise awareness and to find individual, tailored ways to reduce sedentary behaviour and to increase the number of breaks in sedentary behaviour without much interference in daily routine.

  7. A hybrid pulse combining topology utilizing the combination of modularized avalanche transistor Marx circuits, direct pulse adding, and transmission line transformer.

    Science.gov (United States)

    Li, Jiangtao; Zhao, Zheng; Sun, Yi; Liu, Yuhao; Ren, Ziyuan; He, Jiaxin; Cao, Hui; Zheng, Minjun

    2017-03-01

    Numerous applications driven by pulsed voltage require pulses with high amplitude, high repetition frequency, and narrow width, which can be satisfied by utilizing avalanche transistors. The output improvement is severely limited by the power capacities of the transistors. Pulse combining is an effective approach to increase the output amplitude while still adopting conventional pulse-generating modules. However, traditional topologies have drawbacks, including the saturation tendency of the combining efficiency and waveform oscillation. In this paper, a hybrid pulse-combining topology was adopted utilizing the combination of modularized avalanche transistor Marx circuits, direct pulse adding, and a transmission line transformer. The factors affecting the combining efficiency were determined, including the output time synchronization of the Marx circuits and the quantity and position of the magnetic cores. The numbers of parallel modules and stages were determined by the output characteristics of each combining method. Experimental results illustrated the ability to generate pulses with 2-14 kV amplitude, 7-11 ns width, and a maximum 10 kHz repetition rate on a matched 50-300 Ω resistive load. The hybrid topology is a convincing pulse-combining method for similar nanosecond pulse generators based on solid-state switches.

  8. The separation-combination method of linear structures in remote sensing image interpretation and its application

    International Nuclear Information System (INIS)

    Liu Linqin

    1991-01-01

    The separation-combination method, a new kind of analysis method for linear structures in remote sensing image interpretation, is introduced, taking northwestern Fujian as the example, and its practical application is examined. Practice shows that the application results not only reflect the intensities of linear structures in all directions at different locations, but also contribute to the zonation of linear structures and display their spatial distribution laws. Based on analyses of linear structures, the method can provide more remote sensing information for studies of regional mineralization laws and for guiding ore prospecting in combination with mineralization

  9. A combined bottom-up/top-down approach to prepare a sterile injectable nanosuspension.

    Science.gov (United States)

    Hu, Xi; Chen, Xi; Zhang, Ling; Lin, Xia; Zhang, Yu; Tang, Xing; Wang, Yanjiao

    2014-09-10

    To prepare a uniform nanosuspension of strongly hydrophobic riboflavin laurate (RFL) that allows sterile filtration, a physical modification (bottom-up) method was combined with a high-pressure homogenization (top-down) method. Unlike other bottom-up approaches, physical modification with surfactants (TPGS and PL-100) by lyophilization controlled crystallization and compensated for the poor wettability of RFL. On one hand, crystal growth and aggregation during freezing were restricted by a stabilizer layer adsorbed on the drug surface through hydrophobic interaction. On the other hand, subsequent crystallization of the drug in the sublimation process was limited to the interstitial spaces between solvent crystals. After lyophilization, modified drug with a smaller particle size and better wettability was obtained. When surfactant solution was added, water molecules passed between the hydrophilic groups of the surface-active molecules and activated the polymer chains, allowing them to stretch into the water. The coarse suspension was crushed into a nanosuspension (MP=162 nm) by high-pressure homogenization. For long-term stability, lyophilization was applied again to solidify the nanosuspension (with sorbitol as cryoprotectant). Slight crystal growth to about 600 nm was allowed, giving slow release for a sustained effect after muscular administration. Moreover, the absence of paw-licking responses and only very slight muscular inflammation demonstrated the excellent biocompatibility of this long-acting RFL injection. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. About One Approach to Determine the Weights of the State Space Method

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2015-01-01

    features of the problem, allowing us to obtain an analytical solution, it should be recognized that this criterion is, essentially, another form of a convolution of individual criteria. The problem of determining the weighting coefficients in this quadratic convolution is still relevant and is one of the main problems of the method. The author traces the obvious connection between interactive methods for finding Pareto-optimal solutions of the multi-objective optimization (MOO) problem and the classical method of analytical design of optimal controllers (ADOC), in which the decision-maker essentially specifies the same conditions: assigns the weights of individual optimality criteria, imposes restrictions on their values, and evaluates the proposed solutions using a system of alternatives. An important feature of interactive methods is that the decision-makers' preferences may change during the optimization process. The article aims to link both approaches to the MOO problem. The expected advantage is the combination of analytical solutions based on the formulas of the ADOC method with a way of overcoming the shortcomings of criteria convolution caused by the difficult choice of weights. The novelty of the article is that the idea of finding relationships between the weights of criteria by creating an indifference curve (Pareto frontier) is applied to a special type of quadratic integral criterion rather than to a non-linear convolution of criteria, which has its own drawbacks. The quadratic criterion is modified by breaking it into several components. Splitting it into two criteria turned out to be convenient, as it allowed a graphical interpretation on a plane in the coordinates of a criterion describing the management costs and a criterion on the phase coordinates of the control object. Since the method is often used for stabilization relative to a reference trajectory, representing x and u as deviations of the phase coordinates and management costs was illustrative.
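The trade-off this abstract revolves around — how the weights of a quadratic criterion shape the resulting controller — can be seen in closed form for a scalar plant. The following is my own illustration, not the article's formulation: for dx/dt = a·x + b·u with cost J = ∫(q·x² + r·u²)dt, the scalar algebraic Riccati equation yields the optimal gain directly, and raising the state weight q relative to the control weight r raises the feedback gain:

```python
import math

def scalar_lqr_gain(a, b, q, r):
    """Optimal state-feedback gain k (u = -k*x) for the scalar plant
    dx/dt = a*x + b*u with cost  J = ∫ (q*x² + r*u²) dt, b != 0.
    Solves the scalar Riccati equation  2*a*p - (b²/r)*p² + q = 0
    for its positive root p, then k = b*p/r."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r
```

With a = 0 and b = r = 1 the gain is simply √q: quadrupling the state weight doubles the gain, a concrete instance of the weight-selection problem the article addresses.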

  11. Knowledge of users of low-dose oral combined contraceptives about the method

    Directory of Open Access Journals (Sweden)

    Camila Félix Américo

    2013-07-01

    Full Text Available OBJECTIVES: to identify the knowledge of users of combined oral contraceptives about correct use, side effects and complications, and to verify the correlation between knowledge about the method and age, education, family income and time of use. METHOD: cross-sectional study performed in Fortaleza, Ceará, Brazil, from March to July 2010, with 294 women. Data were collected through interviews. RESULTS: 75% had substantial knowledge about proper use and side effects, but no knowledge about complications. The higher the educational level and family income, the greater the women's knowledge about the correct use of the method. A positive correlation suggests that women who used the method for longer knew more about its side effects. CONCLUSION: there are knowledge gaps about the method, which are influenced by socioeconomic variables and time of use.

  12. Induction of angiogenesis in tissue-engineered scaffolds designed for bone repair: a combined gene therapy-cell transplantation approach.

    Science.gov (United States)

    Jabbarzadeh, Ehsan; Starnes, Trevor; Khan, Yusuf M; Jiang, Tao; Wirtel, Anthony J; Deng, Meng; Lv, Qing; Nair, Lakshmi S; Doty, Steven B; Laurencin, Cato T

    2008-08-12

    One of the fundamental principles underlying tissue engineering approaches is that newly formed tissue must maintain sufficient vascularization to support its growth. Efforts to induce vascular growth into tissue-engineered scaffolds have recently been dedicated to developing novel strategies to deliver specific biological factors that direct the recruitment of endothelial cell (EC) progenitors and their differentiation. The challenge, however, lies in orchestration of the cells, appropriate biological factors, and optimal factor doses. This study reports an approach as a step forward to resolving this dilemma by combining an ex vivo gene transfer strategy and EC transplantation. The utility of this approach was evaluated by using 3D poly(lactide-co-glycolide) (PLAGA) sintered microsphere scaffolds for bone tissue engineering applications. Our goal was achieved by isolation and transfection of adipose-derived stromal cells (ADSCs) with adenovirus encoding the cDNA of VEGF. We demonstrated that the combination of VEGF releasing ADSCs and ECs results in marked vascular growth within PLAGA scaffolds. We thereby delineate the potential of ADSCs to promote vascular growth into biomaterials.

  14. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    Science.gov (United States)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
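The transmissibility function described above is defined through a PSD relationship. A rough numerical sketch (my own simplification, not the authors' estimator) estimates T(f) between two response channels as the ratio of a segment-averaged cross-spectrum S_yx to the auto-spectrum S_xx:

```python
import numpy as np

def transmissibility(x, y, nseg=8):
    """Estimate T(f) = S_yx(f) / S_xx(f) by averaging raw FFT spectra
    over nseg non-overlapping segments (Welch-style, no window)."""
    n = len(x) // nseg
    s_yx = np.zeros(n, dtype=complex)
    s_xx = np.zeros(n)
    for k in range(nseg):
        xf = np.fft.fft(x[k * n:(k + 1) * n])
        yf = np.fft.fft(y[k * n:(k + 1) * n])
        s_yx += yf * np.conj(xf)   # accumulate cross-spectrum
        s_xx += np.abs(xf) ** 2    # accumulate auto-spectrum
    return s_yx / s_xx

# Sanity check: if y is just a scaled copy of x, |T(f)| is the scale factor.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
t = transmissibility(x, 2.0 * x)
```

The averaging over segments is what suppresses measurement noise; with a single segment the ratio would be an unaveraged, high-variance estimate.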

  15. Modelling the Cast Component Weight in Hot Chamber Die Casting using Combined Taguchi and Buckingham's π Approach

    Science.gov (United States)

    Singh, Rupinder

    2018-02-01

    Hot chamber (HC) die casting is one of the most widely used commercial processes for casting low-temperature metals and alloys. The process gives a near-net-shape product with high dimensional accuracy. However, in the actual field environment the best settings of the input parameters are often conflicting, as the shape and size of the casting change and one has to trade off among various output parameters such as hardness, dimensional accuracy, casting defects, microstructure etc. So, for online inspection of cast component properties (without affecting the production line), weight measurement has been established as one of the cost-effective methods in the field environment (since the difference in weight between sound and unsound castings reflects possible casting defects). In the present work, at the first stage, the effect of three input process parameters (namely: pressure at the 2nd phase in HC die casting, metal pouring temperature, and die opening time) was studied for optimizing the cast component weight `W' as the output parameter, in the form of a macro model based upon a Taguchi L9 orthogonal array. After this, Buckingham's π approach was applied to the Taguchi-based macro model to develop a micro model. This study highlights the combined Taguchi-Buckingham approach as a case study (for conversion of a macro model into a micro model) through identification of the optimum levels of the input parameters (based on the Taguchi approach) and development of a mathematical model (based on Buckingham's π approach). The developed mathematical model can be used for predicting W in the HC die casting process with more flexibility. The results of the study yield a second-degree polynomial equation for predicting cast component weight in HC die casting and suggest that pressure at the 2nd stage is one of the most significant factors controlling the casting defects/weight of the casting.
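The Taguchi step above can be illustrated with a toy computation (my own sketch with made-up responses, not the paper's data): for a smaller-the-better response, the signal-to-noise ratio of each trial is SN = −10·log10(mean(y²)), and the preferred level of each factor is the one with the highest mean SN over the L9 trials in which it appears:

```python
import math

# First three columns of the standard Taguchi L9 orthogonal array
# (3 factors at 3 levels each; rows are the 9 trials).
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]

def sn_smaller_better(ys):
    """Smaller-the-better S/N ratio over repeated responses of one trial."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def best_levels(responses):
    """Per factor, pick the level with the highest mean S/N ratio."""
    sn = [sn_smaller_better(ys) for ys in responses]
    best = []
    for factor in range(3):
        means = []
        for level in (1, 2, 3):
            vals = [sn[i] for i, row in enumerate(L9) if row[factor] == level]
            means.append(sum(vals) / len(vals))
        best.append(1 + means.index(max(means)))
    return best

# Made-up responses in which low levels of factors 1 and 2 clearly help.
responses = [[1.0 + 0.5 * (a - 1) + 0.2 * (b - 1)] for a, b, _ in L9]
```

Because the array is orthogonal, each level of each factor is averaged over the same mix of the other factors' levels, which is what makes these per-level means comparable.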

  16. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of the facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM, trained to recognize the same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
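The score-fusion step described above — a second-level SVM trained on the shape-based and appearance-based matching scores — can be sketched with a minimal linear SVM trained by stochastic sub-gradient descent on the hinge loss. This is my own self-contained stand-in for a library SVM, and the scores below are synthetic:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=100, seed=0):
    """Minimal linear SVM (hinge loss + L2 penalty) via stochastic
    sub-gradient descent. X: (n, 2) fused score vectors, y: labels ±1."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (X[i] @ w + b) < 1:       # margin violated: push
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                               # margin satisfied: shrink w
                w -= lr * lam * w
    return w, b

# Synthetic shape/appearance matching scores: "same expression" pairs
# score high on both axes, "different expression" pairs score low.
rng = np.random.default_rng(1)
same = rng.normal([0.8, 0.7], 0.05, size=(50, 2))
diff = rng.normal([0.3, 0.2], 0.05, size=(50, 2))
X = np.vstack([same, diff])
y = np.array([1] * 50 + [-1] * 50)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

The learned hyperplane weights the two score channels jointly, which is the point of fusing them instead of thresholding either score alone.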

  17. Logic-based methods for optimization combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  18. Mixed Methods Research in School Psychology: A Mixed Methods Investigation of Trends in the Literature

    Science.gov (United States)

    Powell, Heather; Mihalas, Stephanie; Onwuegbuzie, Anthony J.; Suldo, Shannon; Daley, Christine E.

    2008-01-01

    This article illustrates the utility of mixed methods research (i.e., combining quantitative and qualitative techniques) to the field of school psychology. First, the use of mixed methods approaches in school psychology practice is discussed. Second, the mixed methods research process is described in terms of school psychology research. Third, the…

  19. A call for a multifaceted approach to language learning motivation research: Combining complexity, humanistic, and critical perspectives

    Directory of Open Access Journals (Sweden)

    Julian Pigott

    2012-10-01

    Full Text Available In this paper I give an overview of recent developments in the L2 motivation field, in particular the movement away from quantitative, questionnaire-based methodologies toward smaller-scale qualitative studies incorporating concepts from complexity theory. While complexity theory provides useful concepts for exploring motivation in new ways, it has nothing to say about ethics, morality, ideology, politics, power or educational purpose. Furthermore, calls for its use come primarily from researchers from the quantitative tradition whose aim in importing this paradigm from the physical sciences appears to be to conceptualize and model motivation more accurately. The endeavor therefore remains a fundamentally positivist one. Rather than being embraced as a self-contained methodology, I argue that complexity theory should be used cautiously and prudently alongside methods grounded in other philosophical traditions. Possibilities abound, but here I suggest one possible multifaceted approach combining complexity theory, a humanistic conception of motivation, and a critical perspective.

  20. Constructed Wetlands for Combined Sewer Overflow Treatment—Comparison of German, French and Italian Approaches

    Directory of Open Access Journals (Sweden)

    Daniel Meyer

    2012-12-01

    Full Text Available Combined sewer systems are designed to transport stormwater surface runoff in addition to dry-weather flows, up to defined limits. In most European countries, hydraulic loads greater than the design flow are discharged directly into receiving water bodies, with minimal treatment (screening, sedimentation) or with no treatment at all. One feasible solution to protect receiving waters from strong negative impacts seems to be the application of vertical-flow constructed wetlands. In Germany, the first attempts to use this ecological technology were made in the early 1990s. Since then, development continued until a high level of treatment performance was reached. During recent years the national “state-of-the-art” (defined in 2005) was adapted in other European countries, including France and Italy. Against the background of differing national requirements in combined sewer system design, substantial developmental steps were taken. The use of coarser filter media in combination with alternating loading of separated filter beds allows direct feeding with untreated combined runoff. Permanent water storage in the deep layers of the wetland improves the system’s robustness against extended dry periods, but carries operational risks. Besides similar functions (but different designs and layouts), the correct dimensioning of all approaches suffers from uncertainties in long-term rainfall predictions as well as in sewer system simulation tools.