Badiani, Anna; Montellato, Lara; Bochicchio, Davide; Anfossi, Paola; Zanardi, Emanuela; Maranesi, Magda
Proximate composition and fatty acid profile, conjugated linoleic acid (CLA) isomers included, were determined in the separable lean of raw and cooked lamb rib loins. The cooking methods compared, which were also investigated for cooking yields and true nutrient retention values, were dry heating of fat-on cuts and moist heating of fat-off cuts; the latter method was tested as a sort of dietetic approach against the more traditional former type. Despite significantly (P < 0.05) higher cooking losses, dry heating of fat-on rib loins produced slightly (although only rarely significantly) higher retention values for all of the nutrients considered, including CLA isomers. On the basis of the retention values obtained, both techniques led to a minimal migration of lipids into the separable lean, which was higher (P < 0.05) with the dry heating of fat-on cuts. Retention during cooking of the class of CLA isomers (including that of the nutritionally most important isomer cis-9,trans-11) was more similar to that of the monounsaturated than the polyunsaturated fatty acids.
The present invention provides a method capable of evacuating gases at a rate which varies depending on the kind of gas. For example, in a thermonuclear experimental device, the hydrogen evacuation rate is set to zero while the evacuation rate for other impurity gases is made greater. In a pump incorporating a baffle plate, such as a cryopump or a sorption pump, the baffle plate is cooled to the temperature at which the vapor pressure required for evacuating the gas species to be exhausted is reached. In this case, the required vapor pressure is 1 × 10⁻⁸ Torr or less, preferably 1 × 10⁻⁹ Torr. In a thermonuclear experimental device, the gas with the lowest boiling point next to hydrogen is neon, but neon is scarcely present in nature. Nitrogen has the next lowest boiling point; if the temperature is lowered to the level required for evacuating nitrogen and the gases with boiling points at or below that of nitrogen (carbon monoxide, oxygen, fluorine, argon or methane), then an evacuation rate sufficient for all gases other than hydrogen can be obtained. (I.S.)
new skills and methods supporting the process of evaluating and selecting information systems. This paper presents a method for selecting COTS systems. The method includes the following phases: problem framing, requirements and appraisal, and selection of systems. The idea and distinguishing feature behind the method is that improved understanding of organizational ‘ends’ or goals should govern the selection of a COTS system. This can also be expressed as a match or fit between ‘ends’ (e.g. improved organizational effectiveness) and ‘means’ (e.g. implementing COTS systems). This way of approaching...
First, those nondestructive testing methods which are used in quality assurance (e.g. ultrasonics, radiography, magnetic particle testing, dye penetrant testing, eddy currents) are treated, and their capabilities and limitations are shown. Second, the selection of optimal testing methods under the aspect of defect recognition in different materials and components is shown. (orig./RW)
Hedman, Jonas; Andersson, Bo
feature behind the method is that improved understanding of organizational ‘ends’ or goals should govern the selection of a COTS system. This can also be expressed as a match or fit between ‘ends’ (e.g. improved organizational effectiveness) and ‘means’ (e.g. implementing COTS systems). This way...
Weitman, Catheryn J.
Provided a structural selection method is carried out, it is possible to find quality early childhood personnel. The hiring process involves five definite steps, each of which establishes a base for the next. A needs assessment formulating basic minimal qualifications is the first step. The second step involves review of current job descriptions…
Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana
Selecting the best mining method among many alternatives is a multicriteria decision-making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The problem includes five possible mining methods and eleven criteria to evaluate them. The criteria are chosen to cover the most important parameters that impact mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. AHP is used to analyze the structure of the mining method selection problem and to determine the weights of the criteria, and the PROMETHEE method is used to obtain the final ranking and to perform a sensitivity analysis by changing the weights. The results have shown that the proposed integrated method can be successfully used in solving mining engineering problems.
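The AHP-then-PROMETHEE pipeline described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: it uses the geometric-mean approximation for AHP priorities and the simple "usual" (0/1) preference function for PROMETHEE II, and the matrices are invented toy data, not the Coka Marin figures.

```python
from math import prod

def ahp_weights(pairwise):
    """Criteria weights from an AHP pairwise-comparison matrix
    (geometric-mean approximation of the principal eigenvector)."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def promethee_net_flows(scores, weights, maximize):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function; a higher flow means a better alternative."""
    n = len(scores)
    flows = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # weighted preference of a over b, and of b over a
            p_ab = sum(w for w, sa, sb, mx in zip(weights, scores[a], scores[b], maximize)
                       if (sa > sb if mx else sa < sb))
            p_ba = sum(w for w, sa, sb, mx in zip(weights, scores[a], scores[b], maximize)
                       if (sb > sa if mx else sb < sa))
            flows[a] += (p_ab - p_ba) / (n - 1)
    return flows

# toy example: 3 criteria, 3 candidate mining methods
w = ahp_weights([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]])  # perfectly consistent matrix
flows = promethee_net_flows([[9, 7, 6], [8, 8, 7], [7, 9, 8]], w, [True, True, True])
ranking = sorted(range(3), key=lambda i: -flows[i])
```

In practice the eigenvector method with a consistency-ratio check replaces the geometric-mean shortcut, and PROMETHEE offers six preference function shapes; the "usual" one is only the simplest.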
Van Norman, Staci A.; Aston, Victoria J.; Weimer, Alan W.
Structures, catalysts, and reactors suitable for use for a variety of applications, including gas-to-liquid and coal-to-liquid processes and methods of forming the structures, catalysts, and reactors are disclosed. The catalyst material can be deposited onto an inner wall of a microtubular reactor and/or onto porous tungsten support structures using atomic layer deposition techniques.
Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki
The coincidence methods which are currently used for the accurate activity standardisation of radionuclides require dead-time and resolving-time corrections which tend to become increasingly uncertain as count rates exceed about 10 k counts per second. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method using a fast multichannel analyser (50 ns ch⁻¹) for measuring the count rates. It is in many ways more convenient, and potentially more reliable, to replace the MCA with scalers, and a circuit is described employing five scalers, two of them serving to measure the background correction. Comparisons between our new method and the coincidence method for measuring the activity of ⁶⁰Co sources yielded agreement within statistical uncertainties. (author)
Jarmer, Hanne Østergaard; Christiansen, Freddy Bugge
Population samples including mother-offspring combinations provide information on the selection components: zygotic selection, sexual selection, gametic selection and fecundity selection, on the mating pattern, and on the deviation from linkage equilibrium among the loci studied. The theory...
Hatch, Anson V; Sommer, Gregory J; Singh, Anup K; Wang, Ying-Chih; Abhyankar, Vinay V
Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.
Mann, Nicholas R; Tranter, Troy J
Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.
Van Zyl, Lourens H
Implementations of the DLM are, however, not very versatile in terms of the geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...
Daniels, Michael A.; Condit, Reston A.; Rasmussen, Nikki; Wallace, Ronald S.
Initiation devices may include at least one substrate, an initiation element positioned on a first side of the at least one substrate, and a spark gap electrically coupled to the initiation element and positioned on a second side of the at least one substrate. Initiation devices may include a plurality of substrates where at least one substrate of the plurality of substrates is electrically connected to at least one adjacent substrate of the plurality of substrates with at least one via extending through the at least one substrate. Initiation systems may include such initiation devices. Methods of igniting energetic materials include passing a current through a spark gap formed on at least one substrate of the initiation device, passing the current through at least one via formed through the at least one substrate, and passing the current through an explosive bridge wire of the initiation device.
Fore, C.S.; Faust, R.A.; Brewster, R.H.
This annotated bibliography of 337 references summarizes the literature published on the cleanup and treatment of radioactively contaminated land. Specifically, this bibliography focuses on literature concerned with the methods of cleanup and treatment being applied - chemical, physical, or vegetative stabilization; the types of equipment being used; and the influence of climatic conditions on the method selected for use. The emphasis in such literature is placed on hazardous site cleanup efforts that have been completed as well as those that are in progress or being planned. Appendix A includes 135 additional references to literature identified but not included in the bibliography because of time and funding constraints. Appendix B consists of a table that identifies the cleanup and treatment research conducted at specific sites. All of the information included in this bibliography is stored in a computerized form that is readily available upon request.
Kim, Kilyoo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
Many national projects are stalled because sites for them cannot be determined. Site selections are held up by NIMBY (not in my backyard) responses to unpleasant facilities, or by PIMFY (please in my front yard) competition for preferable facilities, among local governments. Typical examples are - NIMBY projects: high-level radioactive waste disposal, THAAD, nuclear power plants (NPPs), etc.; PIMFY projects: the South-east new airport, KTX stations, a research center for NPP decommissioning, etc. Site selection for high-level radioactive waste disposal is the most difficult of these problems, and the government has postponed the decision into a dead end. Since there appears to be no solution to siting high-level radioactive waste disposal due to NIMBY among local governments, a solution method is proposed in this paper. To decide a high-level radioactive waste disposal site, the first step is to invite bids by offering a package deal that includes PIMFY projects such as a research center for NPP decommissioning. Potential host local governments would be asked to submit sealed bids indicating the minimum compensation sum at which they would accept the high-level radioactive waste disposal site. If more than one local government puts in a bid, an adequate site is decided by considering both the accumulated PESS points and the technical evaluation results. By considering how fairly preferable and unpleasant national projects are distributed among local governments, site selection for NIMBY or PIMFY facilities is suggested. For NIMBY national projects, risk and cost-benefit analysis is useful and required, since it generates the cost values used in the PESS. In many cases the suggested method may not be adequate; however, a similar one should be prepared as a basis for deciding sites for NIMBY or PIMFY national projects.
The present invention offers a method for separating dry matter from a medium. A separation chamber is at least partly defined by a plurality of rollers (2,7) and is capable of being pressure regulated. At least one of the rollers is a pore roller (7) having a surface with pores allowing permeabi...
Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky
Characterization of toxic emissions in life cycle impact assessment (LCIA) is in many cases severely limited by the lack of characterization factors for the emissions mapped in the inventory. The number of substances assigned characterization factors for (eco)toxicity included in the dominating LCA... Selection methods prioritise the substances in the inventory that contribute significantly to the impact categories on ecotoxicity and human toxicity, to focus the characterisation work. The reason why selection methods are more important for the chemical-related impact categories than for other impact categories is the extremely high number... The methods are evaluated against a set of pre-defined criteria (comprising consistency with characterization and data requirement) and applied to case studies and a test set of chemicals. The reported work is part of the EU project OMNIITOX.
Bawendi, Moungi G.; Sundar, Vikram C.
Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.
Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.
This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises: positioning at least one parent metal source, comprising one of the metals, adjacent to a permeable mass comprising at least one metal-containing compound capable of reacting to form the complex oxidation reaction product in the heating step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; heating the parent metal source in the presence of an oxidant to a temperature above its melting point to form a body of molten parent metal, permitting infiltration of the molten parent metal into the permeable mass and reaction with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product.
Lai, Zhiping; Huang, Kuo-Wei; Chen, Wei
In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure provide membranes, methods of making the membrane, systems including the membrane, methods of separation, methods of desalination, and the like.
Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky
This report describes an inventory of Life Cycle Impact Assessment (LCIA) selection methods for assessing toxic releases. It consists of an inventory of current selection methods and other Chemical Ranking and Scoring (CRS) methods assessed to be relevant for the development of (a) new selection method(s) in Work package 8 (WP8) of the OMNIITOX project. The selection methods and the other CRS methods are described in detail, a set of evaluation criteria are developed, and the methods are evaluated against these criteria. This report (Deliverable 11B (D11B)) gives the results from tasks 7.1d, 7.1e and 7.1f of WP 7 for selection methods. The other part of D11 (D11A) is reported in another report and deals with characterisation methods. A selection method is a method for prioritising chemical emissions to be included in an LCIA characterisation of toxic releases, i.e. calculating indicator scores...
Bandor, Michael S
... (the ability of the product to meet the need) and the cost. The method used for the analysis and selection activities can range from the use of basic intuition to counting the number of requirements fulfilled, or something...
In this study, the Fuzzy TOPSIS method was applied to the selection of open-pit trucks, and the optimal solution of the problem was investigated. Data from Turkish Coal Enterprises were used in the application of the method. This paper explains the Fuzzy TOPSIS approach with a group decision-making application in an open-pit coal mine in Turkey. An algorithm for multi-person, multi-criteria decision making with a fuzzy set approach was applied to an equipment selection problem. It was found that Fuzzy TOPSIS with group decision making is a method that may help decision-makers in solving different decision-making problems in mining.
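The ranking step of fuzzy TOPSIS can be sketched compactly. This is a minimal sketch, not the paper's implementation: it assumes triangular fuzzy ratings already normalized to a 0-1 benefit scale and crisp criterion weights (the group-judgment aggregation that precedes this step is omitted), and the truck data are invented.

```python
from math import sqrt

def vertex_distance(a, b):
    """Vertex-method distance between two triangular fuzzy numbers (l, m, u)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0)

def fuzzy_topsis_closeness(ratings, weights):
    """Closeness coefficient of each alternative to the fuzzy positive
    ideal solution (FPIS = (1,1,1)); higher is better."""
    fpis, fnis = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)
    closeness = []
    for alt in ratings:
        d_pos = sum(w * vertex_distance(r, fpis) for w, r in zip(weights, alt))
        d_neg = sum(w * vertex_distance(r, fnis) for w, r in zip(weights, alt))
        closeness.append(d_neg / (d_pos + d_neg))
    return closeness

# two candidate trucks rated on two criteria as triangular fuzzy numbers
cc = fuzzy_topsis_closeness(
    [[(0.7, 0.8, 0.9), (0.6, 0.7, 0.8)],   # truck A
     [(0.2, 0.3, 0.4), (0.3, 0.4, 0.5)]],  # truck B
    weights=[0.6, 0.4])
```

The alternative with the highest closeness coefficient is selected; cost-type criteria would first be converted to benefit scale during normalization.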
Kong, Peter C; Grandy, Jon D; Detering, Brent A; Zuck, Larry D
Electrode assemblies for plasma reactors include a structure or device for constraining an arc endpoint to a selected area or region on an electrode. In some embodiments, the structure or device may comprise one or more insulating members covering a portion of an electrode. In additional embodiments, the structure or device may provide a magnetic field configured to control a location of an arc endpoint on the electrode. Plasma generating modules, apparatus, and systems include such electrode assemblies. Methods for generating a plasma include covering at least a portion of a surface of an electrode with an electrically insulating member to constrain a location of an arc endpoint on the electrode. Additional methods for generating a plasma include generating a magnetic field to constrain a location of an arc endpoint on an electrode.
Gunia, M; Phocas, F; Gourdine, J-L; Bijma, P; Mandonnet, N
The Creole goat is a local breed used for meat production in Guadeloupe (French West Indies). As in other tropical countries, improvement of parasite resistance is needed. In this study, we compared predicted selection responses for alternative breeding programs with or without parasite resistance and resilience traits. The overall breeding goal included traits for production, reproduction, and parasite resilience and resistance to ensure a balanced selection outcome. The production traits were BW and dressing percentage (DP). The reproduction trait was fertility (FER), which was the number of doe kiddings per mating. The resistance trait was worm fecal egg count (FEC), which is a measurement of the number of gastro-intestinal parasite eggs found in the feces. The resilience trait was the packed cell volume (PCV), which is a measurement of the volume of red blood cells in the blood. Dressing percentage, BW, and FEC were measured at 11 mo of age, which is the mating or selling age. Fertility and PCV were measured on females at each kidding period. The breeding program accounting for the overall breeding goal and a selection index including all traits gave annual selection responses of 800 g for BW, 3.75% for FER, 0.08% for DP, -0.005 ln(eggs/g) for FEC, and 0.28% for PCV. The expected selection responses for BW and DP in this breeding program were reduced by 2% and 6%, respectively, compared with a breeding program not accounting for FEC and PCV. The overall breeding program, proposed for the Creole breed, offers the best breeding strategy in terms of expected selection responses, making it possible to improve all traits together. It offers a good balance between production and adaptation traits and may present some interest for the selection of other goat breeds in the tropics.
Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if the predictors are normalised after the introduction of adaptive weights, the performance of the adaptive lasso becomes identical to that of the lasso.
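The effect of the adaptive weights is easiest to see in the orthonormal-design special case, where the lasso has a closed-form soft-thresholding solution. The sketch below (invented numbers, not the paper's algorithm) shows how the weights w_j = 1/|b_ols_j|^gamma shrink large coefficients less and small ones more, which is the intuition behind the oracle property.

```python
def soft_threshold(z, t):
    """S(z, t) = sign(z) * max(|z| - t, 0)."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def adaptive_lasso_orthonormal(beta_ols, lam, gamma=1.0):
    """Adaptive lasso under an orthonormal design: each OLS coefficient
    is soft-thresholded by lam * w_j, with weight w_j = 1/|b|**gamma."""
    beta = []
    for b in beta_ols:
        w = float("inf") if b == 0 else 1.0 / abs(b) ** gamma
        beta.append(soft_threshold(b, lam * w))
    return beta

# one large (signal) and one small (noise) OLS coefficient, same penalty
adaptive = adaptive_lasso_orthonormal([3.0, 0.4], lam=0.3)
plain = [soft_threshold(b, 0.3) for b in [3.0, 0.4]]  # ordinary lasso
```

With the same penalty, the plain lasso shrinks both coefficients by 0.3 and keeps a spurious 0.1, while the adaptive lasso barely shrinks the large coefficient (threshold 0.3/3) and zeroes the small one (threshold 0.75).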
The decisions managers make regarding staff selection strongly determine the success of the company: a correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and on comparison with the valuation that the company considers best in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
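The Hamming-distance ranking described above can be sketched as follows. This is an assumption-laden illustration, not StaffDesigner's code: competence scores are taken to lie on a 0-1 scale, the names and values are invented, and a None entry is my convention for a competence the company has not valued for the ideal candidate (the "evaluated only in part" case).

```python
def matching_level_index(candidate, ideal):
    """1 minus the normalized Hamming distance between a candidate's
    competence profile and the ideal profile; competences not valued
    for the ideal candidate (None) are skipped."""
    pairs = [(c, i) for c, i in zip(candidate, ideal) if i is not None]
    distance = sum(abs(c - i) for c, i in pairs) / len(pairs)
    return 1.0 - distance

# ideal candidate valued on only 3 of 4 competences (0..1 scale)
ideal = [0.9, 0.8, None, 0.7]
candidates = {"Ana": [0.8, 0.8, 0.5, 0.6], "Ben": [0.5, 0.4, 0.9, 0.3]}
ranking = sorted(candidates,
                 key=lambda name: -matching_level_index(candidates[name], ideal))
```

Candidates are ranked by descending index, the candidate closest to the ideal profile coming first.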
Loss of biodiversity is one of the most severe threats to sustainability, and land use and land use change remain the single most important factor. Still, there is no sign of any consensus on how to include impacts on biodiversity from land use and land use change in LCIA. In this paper, different characteristics of biodiversity are discussed and related to proposals on how to include land use and land use change in LCIA. We identify the question of why we should care about biodiversity as a key question, since different motivations will result in different choices of indicators, and we call for more openness in the motivation for indicator selection. We find a promising trend in combining pressure indicators with geographic weighting and regard this as a promising way ahead. More knowledge is still needed on the consequences of different choices, such as the selection of a reference state.
de Boer, L.; Labro, Eva; Morlacchi, Pierangela
this paper we present a review of decision methods reported in the literature for supporting the supplier selection process. The review is based on an extensive search in the academic literature. We position the contributions in a framework that takes the diversity of procurement situations in terms
Walker, Brian J.; Dorn, August; Bulovic, Vladimir; Bawendi, Moungi G.
A photoelectric device, such as a photodetector, can include a semiconductor nanowire electrostatically associated with a J-aggregate. The J-aggregate can facilitate absorption of a desired wavelength of light, and the semiconductor nanowire can facilitate charge transport. The color of light detected by the device can be chosen by selecting a J-aggregate with a corresponding peak absorption wavelength.
Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia
Every student in Malaysia must join co-curricular activities, and these activities bring many benefits: through them, students learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods, the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on the three chosen criteria: soft skills, interesting activities and performances. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO software version 15.0. Two priorities were considered. The first priority, minimizing the budget for the activities, is achieved, since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curricular activities, is also achieved: 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for the activity selection problem.
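The ZOGP step can be sketched as a tiny preemptive goal program: priority 1 keeps total cost within the budget goal, priority 2 maximizes total AHP score. The paper solves its 15-activity model in LINGO; for a handful of activities an exhaustive search over the 0-1 choices is enough, and that is what this sketch does. All numbers are invented.

```python
from itertools import combinations

def zogp_select(costs, scores, budget, n_select):
    """Choose exactly n_select activities, lexicographically minimizing
    (budget overshoot, -total score), i.e. preemptive goal priorities:
    first satisfy the budget goal, then maximize the AHP score."""
    best_key, best_combo = None, None
    for combo in combinations(range(len(costs)), n_select):
        overshoot = max(0, sum(costs[i] for i in combo) - budget)  # deviation from budget goal
        score = sum(scores[i] for i in combo)
        key = (overshoot, -score)
        if best_key is None or key < best_key:
            best_key, best_combo = key, combo
    return best_combo

# 4 candidate activities: cost (RM hundreds) and AHP score (x100)
chosen = zogp_select(costs=[5, 3, 4, 2], scores=[50, 30, 45, 20],
                     budget=7, n_select=2)
```

A real ZOGP formulation would express both priorities through deviation variables in a single integer program; the lexicographic key above mimics the preemptive-priority structure.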
Karande, Prasad; Chakraborty, Shankar
Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria in order to choose the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve the active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method to supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives, or of a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction of an LP problem whose objective function is the minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that the WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real-life examples are illustrated to prove...
Zhang, Yue; Yang, Cheng; Wang, Lei
This paper presents four methods for selective video encryption based on MPEG-2 video compression, operating respectively on the slices, the I-frames, the motion vectors, and the DCT coefficients. We use AES encryption in simulation experiments for the four methods on the VS2010 platform, and compare the visual effect and the per-frame processing speed after the video is encrypted. The encryption depth can be selected arbitrarily; it is designed using the double-limit counting method, so the accuracy can be increased.
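The selective-encryption idea itself is simple: only the payloads of chosen frame types (here I-frames) are enciphered, everything else passes through unchanged. The sketch below is a schematic, not the paper's system: the stream cipher is a SHA-256-based toy stand-in for AES (a real implementation would use AES-CTR from a crypto library), and frame parsing is faked with (type, payload) tuples rather than a real MPEG-2 bitstream.

```python
import hashlib

def _keystream(key, nonce, length):
    """Toy keystream from SHA-256 in counter mode (stand-in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_selected(frames, key, encrypt_types=("I",)):
    """XOR-encrypt only frames whose type is in encrypt_types.
    Because XOR is an involution, applying the function twice with the
    same key decrypts."""
    out = []
    for idx, (ftype, payload) in enumerate(frames):
        if ftype in encrypt_types:
            ks = _keystream(key, idx.to_bytes(8, "big"), len(payload))
            payload = bytes(a ^ b for a, b in zip(payload, ks))
        out.append((ftype, payload))
    return out

# a toy group of pictures: I-frame gets encrypted, P/B pass through
gop = [("I", b"intra frame"), ("P", b"predicted"), ("B", b"bidirectional")]
encrypted = encrypt_selected(gop, key=b"secret")
```

Encrypting only I-frames degrades the whole GOP, since P- and B-frames are predicted from them; encrypting motion vectors or DCT coefficients instead trades security for even less processing per frame.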
Lorenz, G.; Schaefer, B.; Balzat, W.
The method for hydrometallurgical recovery of selected metals refers to ore dressing by means of milling and alkaline leaching of metals, preferably uranium. By adding CaO during wet milling, Na⁺ or K⁺ ions of clayey ores are replaced by Ca²⁺ ions. Due to these ion exchange processes, the uranium bonded with clays becomes more accessible to the leaching solution. The uranium yield increases and the consumption of reagents decreases.
Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke
This study provides an overview of, and criteria for, the selection of a method other than the reference method for microbial analysis of foods. In the first part, an overview of the general characteristics of available rapid methods, both for enumeration and detection, is given with reference to the relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or a control authority to select the most appropriate method for its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase confidence in the performance of a method. A list of currently validated methods for the enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus), as well as for the detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.), is included with reference to the relevant websites to check for updates. In the second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions - and thus the prospective use of the microbial test results - with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account when selecting a method for microbial analysis. Copyright 2010 Elsevier Ltd. All rights reserved.
van der Harst, Eugenie; Potting, José; Kroeze, Carolien
Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of recycling loops, the recycled-content method, and the equal-share method. These six methods were first compared, with an assumed hypothetical 100% recycling rate, for an aluminium can and a disposable polystyrene (PS) cup. The substitution and recycled-content method were next applied with actual rates for recycling, incineration and landfilling for both product systems in selected countries. The six methods differ in their approaches to credit recycling. The three substitution methods stimulate the recyclability of the product and assign credits for the obtained recycled material. The choice to either apply a correction factor, or to account for alternative substituted material has a considerable influence on the LCA results, and is debatable. Nevertheless, we prefer incorporating quality reduction of the recycled material by either a correction factor or an alternative substituted material over simply ignoring quality loss. The allocation-on-number-of-recycling-loops method focusses on the life expectancy of material itself, rather than on a specific separate product. The recycled-content method stimulates the use of recycled material, i.e. credits the use of recycled material in products and ignores the recyclability of the products. The equal-share method is a compromise between the substitution methods and the recycled-content method. The results for the aluminium can follow the underlying philosophies of the methods. The results for the PS cup are additionally influenced by the correction factor or credits for the alternative material accounting for the drop in PS quality, the waste treatment management (recycling rate, incineration rate, landfilling rate), and the
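Per functional unit, the correction-factor substitution method discussed above reduces to a one-line credit: recycled output displaces virgin material production, discounted by a quality factor. The sketch below uses invented numbers and arbitrary impact units; it is an illustration of the accounting, not any of the six methods' reference implementations.

```python
def substitution_credit(recycled_mass, virgin_impact_per_kg, quality_factor):
    """Credit for recycling: avoided virgin production, scaled by a quality
    correction factor in [0, 1] (1 = closed loop, no quality loss)."""
    return recycled_mass * virgin_impact_per_kg * quality_factor

def net_impact(production_impact, mass, recycling_rate,
               virgin_impact_per_kg, quality_factor):
    """Life-cycle impact after subtracting the end-of-life recycling credit."""
    credit = substitution_credit(mass * recycling_rate,
                                 virgin_impact_per_kg, quality_factor)
    return production_impact - credit

# 1 kg product, 50% recycled; recyclate substitutes virgin material at 80% quality
impact = net_impact(production_impact=10.0, mass=1.0, recycling_rate=0.5,
                    virgin_impact_per_kg=8.0, quality_factor=0.8)
```

A quality_factor of 1 reproduces the equal-quality substitution method, while the alternative-material variant instead sets virgin_impact_per_kg to the impact of the material actually substituted; the recycled-content method would credit the input side rather than the recyclability.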
There is little research on inclusion of children with selective mutism in school/kindergarten. Moreover, few studies have tried to understand selectively mute children's interactions in the natural surroundings of their home and school/kindergarten. Five children meeting the DSM-IV criteria for selective mutism were video-observed in social…
Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both
Field, Richard V., Jr.
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data are limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high-risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of: (1) the optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) the optimal pressure load model to be
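The contrast the abstract draws between likelihood-based and decision-theoretic selection can be illustrated with a minimal, hypothetical sketch: choosing the polynomial order of a candidate model class by the Bayesian information criterion (BIC) on synthetic data. The data, the candidate class, and the criterion below are illustrative assumptions, not the formulation used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic system: a quadratic truth observed with Gaussian noise.
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.1, x.size)

def bic_for_degree(d):
    """Fit a degree-d polynomial by least squares and return its BIC."""
    coeffs = np.polyfit(x, y, d)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, d + 1
    sigma2 = np.mean(resid**2)                 # ML estimate of noise variance
    log_lik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return k * np.log(n) - 2.0 * log_lik       # complexity penalty minus fit

# The candidate class: polynomial models of degree 1 through 5.
best = min(range(1, 6), key=bic_for_degree)
print(best)
```

A decision-theoretic variant would replace the BIC score with an expected utility tied to the intended model use, which is exactly the refinement the abstract argues for.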
Hanks, Brantley R.; Skelton, Robert E.
Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
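Before restricting to passive implementations, the starting point is the standard LQR solution of the Algebraic Riccati Equation. The sketch below solves it for a hypothetical undamped oscillator via the stable invariant subspace of the Hamiltonian matrix; the system and weighting matrices are invented for illustration.

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time Algebraic Riccati Equation via the stable
    invariant subspace of the Hamiltonian matrix, then return the optimal
    state-feedback gain K = R^-1 B^T P together with P."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]        # eigenvectors of the n stable modes
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return Rinv @ B.T @ P, P

# Hypothetical undamped oscillator (unit mass and stiffness), force input.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                        # state (energy-like) weighting
R = np.array([[1.0]])                # control effort weighting
K, P = lqr_gain(A, B, Q, R)
closed_loop = np.linalg.eigvals(A - B @ K)
print(closed_loop)
```

Here the optimal feedback effectively adds damping (both closed-loop poles move into the left half-plane); the constrained problem in the paper asks when such a gain can be realized by passive members and dampers.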
Recent years have witnessed a rapid expansion of applications of proteomics to clinical research, including non-malignant lung disorders. These developments bring along the need for standardisation of proteomic experiments. This paper briefly reviews basic methodological aspects of applied proteomic studies, using the SELDI-TOF mass spectrometry platform as an example, but also emphasizes general aspects of quality assurance in proteomics. Key-words: lung proteome, quality assurance, SELDI-TOF MS
Background: The role of air transport in the economic development of a country and its regions cannot be overestimated. The decision concerning an airport's location must be in line with the expectations of all the stakeholders involved. This article deals with the issues related to the choice of sites where airports should be located. Methods: Two main quantitative approaches related to the issue of airport location are presented in this article, i.e. the question of optimizing such a choice and the issue of selecting the location from a predefined set. The former involves mathematical programming and formulating the problem as an optimization task; the latter involves ranking the possible variants. Due to their different methodological backgrounds, the authors present the advantages and disadvantages of both approaches and point to the one which currently has practical application. Results: Based on real-life examples, the authors present a multi-stage procedure which renders it possible to solve the problem of airport location. Conclusions: Based on the overview of the literature on the subject, the authors point to three types of approach to the issue of airport location which could enable further development of currently applied methods.
Holmberg, Andreas; Kierkegaard, Axel; Weng, Chenyang
In this paper, a method for including damping of acoustic energy in regions of strong turbulence is derived for a linearized Navier-Stokes method in the frequency domain. The proposed method is validated and analyzed in 2D only, although the formulation is fully presented in 3D. The result is applied in a study of the linear interaction between the acoustic and the hydrodynamic field in a 2D T-junction, subject to grazing flow at Mach 0.1. Part of the acoustic energy at the upstream edge of the junction is shed as harmonically oscillating disturbances, which are conveyed across the shear layer over the junction, where they interact with the acoustic field. As the acoustic waves travel in regions of strong shear, there is a need to include the interaction between the background turbulence and the acoustic field. For this purpose, the oscillation of the background turbulence Reynolds stress, due to the acoustic field, is modeled using an eddy Newtonian model assumption. The time averaged flow is first solved for using RANS along with a k-ε turbulence model. The spatially varying turbulent eddy viscosity is then added to the spatially invariant kinematic viscosity in the acoustic set of equations. The response of the 2D T-junction to an incident acoustic field is analyzed via a plane wave scattering matrix model, and the result is compared to experimental data for a T-junction of rectangular ducts. A strong improvement in the agreement between calculation and experimental data is found when the modification proposed in this paper is implemented. Discrepancies remaining are likely due to inaccuracies in the selected turbulence model, which is known to produce large errors e.g. for flows with significant rotation, which the grazing flow across the T-junction certainly is. A natural next step is therefore to test the proposed methodology together with more sophisticated turbulence models.
Srinivas, Nuggehally R
The development of sound bioanalytical method(s) is of paramount importance during the process of drug discovery and development culminating in a marketing approval. Although the bioanalytical procedure(s) originally developed during the discovery stage may not necessarily be fit to support the drug development scenario, they may be suitably modified and validated, as deemed necessary. Several reviews have appeared over the years describing analytical approaches including various techniques, detection systems, automation tools that are available for an effective separation, enhanced selectivity and sensitivity for quantitation of many analytes. The intention of this review is to cover various key areas where analytical method development becomes necessary during different stages of drug discovery research and development process. The key areas covered in this article with relevant case studies include: (a) simultaneous assay for parent compound and metabolites that are purported to display pharmacological activity; (b) bioanalytical procedures for determination of multiple drugs in combating a disease; (c) analytical measurement of chirality aspects in the pharmacokinetics, metabolism and biotransformation investigations; (d) drug monitoring for therapeutic benefits and/or occupational hazard; (e) analysis of drugs from complex and/or less frequently used matrices; (f) analytical determination during in vitro experiments (metabolism and permeability related) and in situ intestinal perfusion experiments; (g) determination of a major metabolite as a surrogate for the parent molecule; (h) analytical approaches for universal determination of CYP450 probe substrates and metabolites; (i) analytical applicability to prodrug evaluations-simultaneous determination of prodrug, parent and metabolites; (j) quantitative determination of parent compound and/or phase II metabolite(s) via direct or indirect approaches; (k) applicability in analysis of multiple compounds in select
Lillo, Thomas M.; Chu, Henry S.; Harrison, William M.; Bailey, Derek
Methods of forming composite materials include coating particles of titanium dioxide with a substance including boron (e.g., boron carbide) and a substance including carbon, and reacting the titanium dioxide with the substance including boron and the substance including carbon to form titanium diboride. The methods may be used to form ceramic composite bodies and materials, such as, for example, a ceramic composite body or material including silicon carbide and titanium diboride. Such bodies and materials may be used as armor bodies and armor materials. Such methods may include forming a green body and sintering the green body to a desirable final density. Green bodies formed in accordance with such methods may include particles comprising titanium dioxide and a coating at least partially covering exterior surfaces thereof, the coating comprising a substance including boron (e.g., boron carbide) and a substance including carbon.
McDuffie, H.F.; Smith, D.H.; Owen, P.T.
This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles
Jeronen, Eila; Palmberg, Irmeli; Yli-Panula, Eija
There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education…
Gunia, M.; Phocas, F.; Gourdine, J.L.; Bijma, P.; Mandonnet, N.
The Creole goat is a local breed used for meat production in Guadeloupe (French West Indies). As in other tropical countries, improvement of parasite resistance is needed. In this study, we compared predicted selection responses for alternative breeding programs with or without parasite resistance
Bakr, Osman; Peng, Wei; Wang, Lingfei
Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making
Irfianti Asti Dwi
Genetic Algorithm is one alternative solution in the field of modeling optimization, automatic programming and machine learning. The purpose of the study was to compare several types of selection methods in Genetic Algorithms for land suitability. The contribution of this research is to apply the best method to develop region-based horticultural commodities. The testing was done by comparing three selection methods: the Roulette Wheel, Tournament Selection and Stochastic Universal Sampling. Location parameters used in the first test scenario include Temperature = 27°C, Rainfall = 1200 mm, Humidity = 30%, Cluster fruit = 4, Crossover Probability (Pc) = 0.6, Mutation Probability (Pm) = 0.2 and Epoch = 10. The second test scenario includes Temperature = 30°C, Rainfall = 2000 mm, Humidity = 35%, Cluster fruit = 5, Crossover Probability (Pc) = 0.7, Mutation Probability (Pm) = 0.3 and Epoch = 10. The conclusion of this study is that the Roulette Wheel is the best method because it produces more stable and better fitness values than the other two methods.
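The three selection operators compared in the study can be sketched generically as below. This is a textbook-style implementation with an invented fitness vector, not the authors' code or parameter settings.

```python
import random

def roulette_wheel(fitness, k, rng):
    """Fitness-proportionate selection: each pick lands on individual i
    with probability fitness[i] / sum(fitness)."""
    total = sum(fitness)
    picks = []
    for _ in range(k):
        r, acc = rng.uniform(0.0, total), 0.0
        for i, f in enumerate(fitness):
            acc += f
            if acc >= r:
                picks.append(i)
                break
        else:
            picks.append(len(fitness) - 1)   # guard against rounding
    return picks

def tournament(fitness, k, rng, size=3):
    """Each pick is the fittest member of a random tournament of `size`."""
    return [max(rng.sample(range(len(fitness)), size), key=lambda i: fitness[i])
            for _ in range(k)]

def stochastic_universal_sampling(fitness, k, rng):
    """SUS: k equally spaced pointers over one spin of the wheel, which
    lowers the sampling variance relative to plain roulette selection."""
    total = sum(fitness)
    step = total / k
    start = rng.uniform(0.0, step)
    picks, acc, i = [], fitness[0], 0
    for p in range(k):
        pointer = start + p * step
        while acc < pointer:
            i += 1
            acc += fitness[i]
        picks.append(i)
    return picks

rng = random.Random(42)
fitness = [1.0, 2.0, 4.0, 8.0]       # invented fitness values
for method in (roulette_wheel, tournament, stochastic_universal_sampling):
    print(method.__name__, method(fitness, 4, rng))
```

The stability the authors report for roulette selection would be assessed by repeating such runs across epochs and comparing the fitness trajectories of the three operators.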
Balabin, Roman M.; Smirnov, Sergey V.
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm^-1) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic
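The 16 algorithms themselves are not reproduced here, but the basic idea of wavelength (variable) selection can be sketched with a simple forward-stepwise multiple linear regression, closest in spirit to MLR-step; the spectra, property values, and dimensions below are synthetic and purely illustrative.

```python
import numpy as np

def forward_select(X, y, n_features):
    """Greedy forward selection: repeatedly add the wavelength (column)
    whose inclusion most reduces the least-squares residual of a linear
    calibration model with intercept."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        def rss(j):
            A = np.column_stack([X[:, selected + [j]], np.ones(len(y))])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ beta
            return float(r @ r)
        best = min(remaining, key=rss)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))                    # 20 hypothetical wavelengths
y = 3.0 * X[:, 4] - 2.0 * X[:, 11] + rng.normal(0.0, 0.1, 100)  # property values
print(sorted(forward_select(X, y, 2)))
```

On this synthetic example, the two informative columns are recovered; the interval and window methods in the paper differ mainly in searching over contiguous wavelength blocks rather than single variables.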
There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education in several scientific databases. The article provides an overview of 24 selected articles published in peer-reviewed scientific journals from 2006–2016. The data was analyzed using qualitative content analysis. Altogether, 16 journals were selected and 24 articles were analyzed in detail. The foci of the analyses were teaching methods, learning environments, knowledge and thinking skills, psychomotor skills, emotions and attitudes, and evaluation methods. Additionally, features of good methods were investigated and their implications for teaching were emphasized. In total, 22 different teaching methods were found to improve sustainability education in different ways. The most emphasized teaching methods were those in which students worked in groups and participated actively in learning processes. Research points toward the value of teaching methods that provide a good introduction and supportive guidelines and include active participation and interactivity.
Harst-Wintraecken, van der Eugenie; Potting, José; Kroeze, Carolien
Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of
Juell-Skielse, Gustaf; Nilsson, Anders G.; Nordqvist, Andreas; Westergren, Mattias
Conventional on-premise installations of ERP are now rapidly being replaced by ERP as service. Although ERP becomes more accessible and no longer requires local infrastructure, current selection methods do not take full advantage of the provided agility. In this paper we present AMES (Agile Method for ERP Selection), a novel method for ERP selection which better utilizes the strengths of service oriented ERP. AMES is designed to shorten lead time for selection, support identification of essen...
de Boer, L.; van der Wegen, Leonardus L.M.; Telgen, Jan
Initial purchasing decisions such as make-or-buy decisions and supplier selection are decisions of strategic importance to companies. The nature of these decisions usually is complex and unstructured. Management Science techniques might be helpful tools for this kind of decision making problems. So
This article presents data on the anthropogenic air emissions of selected substances (CO2, SO2, total suspended particles (TSP), dioxins and furans (PCDD/F), Pb and Cd) subject to reporting under the Climate Convention (UNFCCC) or the Convention on Long-range Transboundary Air Pollution (UNECE CLRTAP). It also presents the national emissions of these substances in 2014 by the major source categories and defines the share of metal production in these emissions. The analysis is based on national emission inventory reports. The most important source of air emissions in the case of CO2 and SO2 is the 1.A.1 Energy industries category. TSP and PCDD/F are emitted mainly from fuel combustion in small sources (i.a. households). Emission of heavy metals (Pb and Cd) is connected mostly with the 1.A.2 Manufacturing industries and construction category. Among all the considered substances, metallurgy is a significant source of emission only for lead and cadmium. The shares of particular sectors in the national emissions of given pollutants are important in view of possible reduction measures and for determining in which industries such measures could bring about tangible results.
Hofmann, Douglas C. (Inventor); Kennett, Andrew (Inventor)
Systems and methods to fabricate objects including metallic glass-based materials using low-pressure casting techniques are described. In one embodiment, a method of fabricating an object that includes a metallic glass-based material includes: introducing molten alloy into a mold cavity defined by a mold using a low enough pressure such that the molten alloy does not conform to features of the mold cavity that are smaller than 100 microns; and cooling the molten alloy such that it solidifies, the solid including a metallic glass-based material.
Levin-Decanini, Tal; Connolly, Sucheta D; Simpson, David; Suarez, Liza; Jacob, Suma
Elucidating differences in social-behavioral profiles of children with comorbid presentations, utilizing caregiver as well as teacher reports, will refine our understanding of how contextual symptoms vary across anxiety-related disorders. In our pediatric anxiety clinic, the most frequent diagnoses and comorbidities were mixed anxiety (MA; ≥ 1 anxiety disorder; N = 155), anxiety with comorbid attention-deficit hyperactivity disorder (MA/ADHD, N = 47) and selective mutism (SM, N = 48). Behavioral measures (CPRS, CTRS) were analyzed using multiple one-way multivariate analyses of covariance tests. Differences between the three diagnostic groups were examined using completed parent and teacher reports (N = 135, 46, and 48 for MA, MA/ADHD, and SM groups, respectively). Comparisons across the MA, MA/ADHD, and SM groups indicate a significant multivariate main effect of group for caregiver and teacher responses (P < 0.01). Caregivers reported that children with SM are similar in profile to those with MA, and both groups were significantly different from the MA/ADHD group. Teachers reported that children with SM had more problems with social behaviors than did children in the MA or MA/ADHD groups. Further comparison indicates a significant main effect of group (P < 0.001), such that children with SM have the greatest differences in behavior observed by teachers versus caregivers. Clinical profiles between MA/ADHD, MA, and SM groups varied, illustrating the importance of multi-rater assessment scales to capture subtle distinctions and to inform treatment planning given that comorbidities occur frequently in children who present with anxiety. © 2013 Wiley Periodicals, Inc.
Westelaken, van de, Thijs; Terwee, Bas; Ravesteijn, Pascal
In recent years business process management (BPM) and specifically information systems that support the analysis, design and execution of processes (also called business process management systems (BPMS)) are getting more attention. This has led to an increase in research on BPM and BPMS. However the research on BPMS is mostly focused on the architecture of the system and how to implement such systems. How to select a BPM system that fits the strategy and goals of a specific organization is ...
Kivotides, Demosthenes
An asymptotically exact method for the direct computation of turbulent polymeric liquids that includes (a) fully resolved, creeping microflow fields due to hydrodynamic interactions between chains, (b) exact account of (subfilter) residual stresses, (c) polymer Brownian motion, and (d) direct calculation of chain entanglements, is formulated. Although developed in the context of polymeric fluids, the method is equally applicable to turbulent colloidal dispersions and aerosols. - Highlights: • An asymptotically exact method for the computation of polymer and colloidal fluids is developed. • The method is valid for all flow inertia and all polymer volume fractions. • The method models entanglements and hydrodynamic interactions between polymer chains.
Garbowska, M; Berthold-Pluta, A; Stasiak-Różańska, L
The cultivation of spices and herbs in parts of the world characterized by warm climate and high humidity provides excellent conditions for the development of microorganisms, including the undesirable ones. The aim of this study was to determine the microbiological quality of spices and herbs available on the Polish market, considering the occurrence of Cronobacter species bacteria. Analyses covered 60 samples of commercial spices and herbs, including 38 samples of dried herbs (basil, bay leaves, thyme, oregano, tarragon, marjoram, dill, parsley, rosemary, lovage) and 16 samples of seasoning blends as well as 6 samples of spices seeds and fruits (pimento, black pepper, coriander). All samples were tested for the total count of aerobic mesophilic bacteria (TAMB) and for the presence of Cronobacter spp. In most of the samples of spices and herbs (60.0%), the TAMB did not exceed 10^4 CFU/g, and the level regarded as unacceptable (>10^6 CFU/g) was not identified in any of the samples. The presence of Cronobacter spp. was demonstrated in 10 (16.7%) samples of the analyzed products; however, these were mainly samples of herbs (basil, tarragon, parsley) and one sample of a seasoning blend (Provence herbs). The highest microbiological contamination (TAMB) was found in samples of herbs (oregano, tarragon, basil) and in ready seasoning blends, in 21.1% and 25.0% of which the total count of aerobic mesophiles was in the range of 10^5-10^6 CFU/g. In all samples of spices seeds and fruits (coriander, black pepper and pimento), the total count of aerobic bacteria reached spices and herbs available on the Polish market. The study demonstrated also that dried spices and herbs may be carriers of Cronobacter species bacteria, though their presence is not often detected in products of this type. Copyright © 2015 Elsevier Ltd. All rights reserved.
Paixao, S.B.; Marzo, M.A.S.; Alvim, A.C.M.
The calculation method used in the WIGLE code is studied. Because no detailed account of this method was previously available, it is expounded here in detail. The method has been applied to the solution of the one-dimensional, two-group diffusion equations in slab geometry for axial analysis, including non-boiling heat transfer and accounting for feedback. A steady-state program (CITER-1D), written in FORTRAN 4, has been implemented, providing excellent results that ratify the quality of the work developed. (Author)
Bakr, Osman M.
Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making organometallic halide perovskite monocrystalline film, and the like.
Widyas, Nuzul; Jensen, Just; Nielsen, Vivi Hunnicke
A selection experiment was performed for weight gain over 13 generations of outbred mice. A total of 18 lines were included in the experiment. Nine lines were allotted to each of the two treatment diets (19.3 and 5.1% protein). Within each diet, three lines were selected upwards, three lines were selected downwards, and three lines were kept as controls. Bayesian statistical methods were used to estimate the genetic variance components. The mixed model analysis was modified to include a mutation effect following the methods of Wray (1990). DIC was used to compare the models. Models including the mutation effect had a better fit than the model with only an additive effect. Mutation as a direct effect contributes 3.18% of the total phenotypic variance, while in the model with interactions between additive and mutation effects, it contributes 1.43% as a direct effect and 1.36% as an interaction effect of the total variance.
Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K
Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.
Effect of cooking methods on the micronutrient profile of selected vegetables: okra fruit (Abelmoschus esculentus), fluted pumpkin (Telfairia occidentalis), African spinach (Amaranthus viridis), and scent leaf (Ocimum gratissimum).
Dan Florin NITOI
This paper presents technological methods for metal deposition, with cost calculation and classification, for the main processes that help in automotive technologies to repair parts or to improve their properties. The paper was constructed on the basis of many technological experiments that start from practitioners and return to them. The main aim is to help young engineers or practicing engineers to choose the proper reconditioning process, with the best information, for repairing parts in the automotive industry.
DeWall, Kevin George [Pocatello, ID; Garcia, Humberto Enrique [Idaho Falls, ID; McKellar, Michael George [Idaho Falls, ID
Methods of evaluating a fluid condition may include stroking a valve member and measuring a force acting on the valve member during the stroke. Methods of evaluating a fluid condition may include measuring a force acting on a valve member in the presence of fluid flow over a period of time and evaluating at least one of the frequency of changes in the measured force over the period of time and the magnitude of the changes in the measured force over the period of time to identify the presence of an anomaly in a fluid flow and, optionally, its estimated location. Methods of evaluating a valve condition may include directing a fluid flow through a valve while stroking a valve member, measuring a force acting on the valve member during the stroke, and comparing the measured force to a reference force. Valve assemblies and related systems are also disclosed.
Nakaya, Akihiro; Isobe, Sachiko N
Genomic selection or genome-wide selection (GS) has been highlighted as a new approach for marker-assisted selection (MAS) in recent years. GS is a form of MAS that selects favourable individuals based on genomic estimated breeding values. Previous studies have suggested the utility of GS, especially for capturing small-effect quantitative trait loci, but GS has not become a popular methodology in the field of plant breeding, possibly because there is insufficient information available on GS for practical use. In this review, GS is discussed from a practical breeding viewpoint. Statistical approaches employed in GS are briefly described, before the recent progress in GS studies is surveyed. GS practices in plant breeding are then reviewed before future prospects are discussed. Statistical concepts used in GS are discussed with genetic models and variance decomposition, heritability, breeding value and linear model. Recent progress in GS studies is reviewed with a focus on empirical studies. For the practice of GS in plant breeding, several specific points are discussed including linkage disequilibrium, feature of populations and genotyped markers and breeding scheme. Currently, GS is not perfect, but it is a potent, attractive and valuable approach for plant breeding. This method will be integrated into many practical breeding programmes in the near future with further advances and the maturing of its theory.
Yoon, GeunSeok; Park, Sang-Min; Yang, Heuiwon; Tsang, Daniel C W; Alessi, Daniel S; Baek, Kitae
During the measurement of total organic carbon (TOC), dissolved organic carbon is converted into CO2 by using high temperature combustion (HTC) or wet chemical oxidation (WCO). However, the criteria for selecting between the oxidation methods are not clear. In this study, the chemical structures of the organic material were considered as a key factor in selecting the oxidation method used. Most non-degradable organic compounds showed a similar oxidation efficiency in both methods, including natural organic compounds, dyes, and pharmaceuticals, and thus both methods are appropriate for measuring TOC in waters containing these compounds. However, only a fraction of the carbon in the halogenated compounds (perfluorooctanoic acid and trifluoroacetic acid) was oxidized using WCO, resulting in measured TOC values considerably lower than those determined by HTC. This result is likely due to the electronegativity of the halogen elements, which inhibits the approach of electron-rich sulfate radicals in the WCO, and the higher bond strength of carbon-halogen pairs as compared to carbon-hydrogen bonds, which results in a lower degree of oxidation of the compounds. Our results indicate that WCO can be used to oxidize most organic compounds, but may not be appropriate for quantifying TOC in organic carbon pools that contain certain halogenated compounds. Copyright © 2018 Elsevier Ltd. All rights reserved.
Meier, M.; Yadigaroglu, G. [Swiss Federal Institute of Technology, Nuclear Engineering Lab. ETH-Zentrum, CLT, Zurich (Switzerland); Smith, B. [Paul Scherrer Inst. (PSI), Villigen (Switzerland). Lab. for Thermal-Hydraulics
Various versions of Volume-of-Fluid (VOF) methods have been used successfully for the numerical simulation of gas-liquid flows with explicit tracking of the phase interface. Of these, Piecewise-Linear Interface Construction (PLIC-VOF) appears to be a fairly accurate, although somewhat more involved, variant. Including effects due to surface tension remains a problem, however. The most prominent methods, the Continuum Surface Force (CSF) method of Brackbill et al. and the method of Zaleski and co-workers (both referenced later), both induce spurious or 'parasitic' currents and achieve only moderate accuracy in determining the curvature. We present here a new method to determine curvature accurately using an estimator function, which is tuned with a least-squares fit against reference data. Furthermore, we show how spurious currents may be drastically reduced using the reconstructed interfaces from the PLIC-VOF method. (authors)
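The abstract does not specify the tuned estimator function, so as a hedged stand-in the sketch below uses a plain least-squares (Kåsa) circle fit to recover curvature from reconstructed interface points; this is a simpler estimator than the one the paper describes, shown only to illustrate the idea of fitting interface geometry against reference data.

```python
import numpy as np

def curvature_circle_fit(x, y):
    """Least-squares (Kasa) circle fit: from (x-a)^2 + (y-b)^2 = R^2 we get
    x^2 + y^2 = 2 a x + 2 b y + (R^2 - a^2 - b^2), a linear system in
    (a, b, c); the interface curvature estimate is 1/R."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    R = np.sqrt(c + a**2 + b**2)
    return 1.0 / R

# Interface points sampled from a circle of radius 0.25 with slight noise,
# mimicking a PLIC-reconstructed interface
theta = np.linspace(0, np.pi / 2, 20)
rng = np.random.default_rng(1)
x = 0.25 * np.cos(theta) + rng.normal(0, 1e-4, theta.size)
y = 0.25 * np.sin(theta) + rng.normal(0, 1e-4, theta.size)
print(curvature_circle_fit(x, y))  # close to 4.0 (= 1/0.25)
```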
Mortaza Honarmande Azimi
Selection of robots from several proposed alternatives is a very important and tedious task. Decision makers are not limited to one method, and several methods have been proposed for solving this problem. This study presents the Polygons Area Method (PAM) as a multi-attribute decision-making method for the robot selection problem. In this method, the maximum polygon area obtained from the attributes of an alternative robot on the radar chart is introduced as a decision-making criterion. The results of this method are compared with other typical multiple attribute decision-making methods (SAW, WPM, TOPSIS, and VIKOR) using two examples. To find the similarity in rankings given by different methods, Spearman's rank correlation coefficients are obtained for different pairs of MADM methods. It was observed that the introduced method is in good agreement with other well-known MADM methods for the robot selection problem.
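The polygon-area criterion and the Spearman comparison can be sketched as follows. This is a minimal illustration under stated assumptions: attribute values are already normalized, axes are equally spaced on the radar chart, the maximum area is taken over axis orderings (feasible only for few attributes), and the robot scores are invented.

```python
import numpy as np
from itertools import permutations

def radar_area(values):
    """Area of the polygon traced by attribute values on equally spaced
    radar-chart axes: 0.5 * sin(2*pi/n) * sum(v_i * v_{i+1}), cyclically."""
    v = np.asarray(values, dtype=float)
    n = len(v)
    return 0.5 * np.sin(2 * np.pi / n) * np.sum(v * np.roll(v, -1))

def max_radar_area(values):
    """Axis ordering changes the area; take the maximum over orderings."""
    return max(radar_area(p) for p in permutations(values))

def spearman_rho(rank1, rank2):
    """Spearman's rank correlation: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    d = np.asarray(rank1) - np.asarray(rank2)
    n = len(d)
    return 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))

# Three hypothetical robots scored on four normalized attributes
robots = {"R1": [0.9, 0.8, 0.7, 0.9],
          "R2": [0.6, 0.9, 0.5, 0.8],
          "R3": [0.4, 0.5, 0.6, 0.5]}
areas = {name: max_radar_area(v) for name, v in robots.items()}
pam_rank = sorted(areas, key=areas.get, reverse=True)
print(pam_rank)  # ['R1', 'R2', 'R3']

# Agreement with another (hypothetical) MADM method's ranks via Spearman's rho
print(spearman_rho([1, 2, 3], [1, 3, 2]))  # 0.5
```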
Systemic sclerosis is a chronic connective tissue disease characterized by microvascular abnormalities, immune disturbances and progressive fibrosis of the skin and internal organs. Skin involvement may result in contractures, leading to marked loss of hand mobility, adversely affecting the performance of daily activities and decreasing the quality of life. Face involvement not only causes functional loss, but also lowers the self-esteem of patients. Increasing attention has recently been focused on the need to rehabilitate patients with systemic sclerosis in order to prevent the development of joint contractures and loss of mobility. The study presents a review of the current literature on rehabilitation possibilities in patients with systemic sclerosis, with a special focus on physiotherapy methods.
Rollins, Harry W [Idaho Falls, ID]; Petkovic, Lucia M [Idaho Falls, ID]; Ginosar, Daniel M [Idaho Falls, ID]
Catalytic structures include a catalytic material disposed within a zeolite material. The catalytic material may be capable of catalyzing a formation of methanol from carbon monoxide and/or carbon dioxide, and the zeolite material may be capable of catalyzing a formation of hydrocarbon molecules from methanol. The catalytic material may include copper and zinc oxide. The zeolite material may include a first plurality of pores substantially defined by a crystal structure of the zeolite material and a second plurality of pores dispersed throughout the zeolite material. Systems for synthesizing hydrocarbon molecules also include catalytic structures. Methods for synthesizing hydrocarbon molecules include contacting hydrogen and at least one of carbon monoxide and carbon dioxide with such catalytic structures. Catalytic structures are fabricated by forming a zeolite material at least partially around a template structure, removing the template structure, and introducing a catalytic material into the zeolite material.
Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun
This paper aims to solve automated feature selection problem in brain computer interface (BCI). In order to automate feature selection process, we proposed a novel EEG feature selection method based on decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principle component analysis (PCA) was used, and the selection process based on decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on decision tree to BCI Competition II datasets Ia, and the experiment showed encouraging results.
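The PCA-extraction, decision-tree-selection, SVM-classification pipeline can be sketched with scikit-learn. This is a hedged sketch only: synthetic data stands in for the EEG feature vectors of BCI Competition II dataset Ia, and the dimensions and "keep top 3" cutoff are invented, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-in for EEG feature vectors: 2 informative + 18 noise dims
n = 400
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, 20))
X[:, 0] += 2.0 * y          # informative dimension
X[:, 1] -= 1.5 * y          # informative dimension

# 1) PCA feature extraction
Z = PCA(n_components=10).fit_transform(X)

# 2) Decision-tree-based selection: keep components with largest importances
tree = DecisionTreeClassifier(random_state=0).fit(Z, y)
keep = np.argsort(tree.feature_importances_)[::-1][:3]

# 3) SVM classification on the automatically selected components
Ztr, Zte, ytr, yte = train_test_split(Z[:, keep], y, random_state=0)
acc = SVC().fit(Ztr, ytr).score(Zte, yte)
print(round(acc, 2))
```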
Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen
Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research.
Hammond, Scott Ryan; Olson, Dana C.; van Hest, Marinus Franciscus Antonius Maria
Methods for producing thin film charge selective transport layers are provided. In one embodiment, a method for forming a thin film charge selective transport layer comprises: providing a precursor solution comprising a metal containing reactive precursor material dissolved into a complexing solvent; depositing the precursor solution onto a surface of a substrate to form a film; and forming a charge selective transport layer on the substrate by annealing the film.
Mougaard, J.F.; Poulsen, P.N.; Nielsen, L.O.
The eXtended Finite Element Method (XFEM) is a useful tool for modeling the growth of discrete cracks in structures made of concrete and other quasi-brittle and brittle materials. However, in a standard application of XFEM, the tangent stiffness is not complete. This is a result of not including the crack geometry parameters, such as the crack length and the crack direction, directly in the virtual work formulation. For efficiency, it is essential to obtain a complete tangent stiffness. A new method is presented in this work to include, in an incremental form, the crack growth parameters on equal terms with the degrees of freedom in the FEM equations. The complete tangential stiffness matrix is based on the virtual work together with the constitutive conditions at the crack tip. Introducing the crack growth parameters as direct unknowns, both the equilibrium equations and the crack tip criterion can be handled.
Relationship of Source Selection Methods to Contract Outcomes: An Analysis of Air Force Source Selection. December 2015. Capt Jacques Lamoureux, USAF. On some occasions, performance is terminated early; this can occur due to either mutual agreement or a breach of contract by one of the parties (Garrett). The study focuses on the contract management process, with special emphasis on the source selection methods of tradeoff and lowest price technically acceptable (LPTA).
Antenna selection has been regarded as an effective method to acquire the diversity benefits of multiple antennas while potentially reducing hardware costs. This paper focuses on receive antenna selection. Based on the proportion between the numbers of total receive antennas and selected antennas, and the influence of each antenna on system capacity, we propose a fast adaptive antenna selection algorithm for wireless multiple-input multiple-output (MIMO) systems. Mathematical analysis and numerical results show that our algorithm significantly reduces the computational complexity and memory requirement while achieving considerable system capacity gain compared with the optimal selection technique.
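The paper's fast adaptive algorithm is not spelled out in the abstract; as a hedged baseline, the sketch below shows the common greedy capacity-based receive antenna selection that such algorithms are typically compared against. Channel dimensions and SNR are invented for illustration.

```python
import numpy as np

def capacity(H, snr):
    """MIMO capacity log2 det(I + snr/Nt * H H^H) for channel H (Nr x Nt)."""
    Nr, Nt = H.shape
    G = np.eye(Nr) + (snr / Nt) * H @ H.conj().T
    return np.linalg.slogdet(G)[1] / np.log(2)

def greedy_select(H, snr, k):
    """Greedily add the receive antenna (row of H) that raises capacity most."""
    chosen, remaining = [], list(range(H.shape[0]))
    for _ in range(k):
        best = max(remaining, key=lambda r: capacity(H[chosen + [r], :], snr))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Rayleigh-fading channel: 8 receive antennas, 4 transmit antennas
rng = np.random.default_rng(7)
H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
sel = greedy_select(H, snr=10.0, k=4)
# Selecting a subset can never exceed the full-array capacity
print(len(sel), capacity(H[sel, :], 10.0) <= capacity(H, 10.0))  # 4 True
```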
Chopra, A.K.; Guttierrez, J.A.
A general substructure method for analysis of the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum, for example as a viscoelastic halfspace, or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case, the direct and substructure methods would lead to equivalent results, but the latter provides the better alternative. Treating the free field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumption, regarding the type and direction of earthquake waves, required in the direct method. The substructure method is computationally efficient because the two substructures, the structure and the soil region, are analyzed separately; and, more important, it permits taking advantage of the important feature that the response to earthquake ground motion is essentially contained in the lower few natural modes of vibration of the structure on a fixed base. For sites where essentially similar soils extend to large depths and there is no obvious rigid boundary such as a soil-rock interface, numerical results for the earthquake response of a nuclear reactor structure are presented to demonstrate that the commonly used finite element method may lead to unacceptable errors, whereas the substructure method leads to reliable results.
Kim, Man Cheol
pages of fault trees which would need to be redrawn from the logical relations between the components in the DPPS. On the other hand, the RGGG model for the DPPS can be drawn in only one page, and the structure of the model is almost identical to the actual structure of the DPPS. In addition, the RGGG model visually shows the state of information processed by each component. In this sense, I believe that the RGGG method is more intuitive and easier to use. Quantitative analysis of the fault tree model and the RGGG model shows that the two models produce equivalent results. A currently identified disadvantage is the calculation time, since many approximation algorithms have already been developed for fault tree analysis, but not for the RGGG method. As a new method for HRA, I develop a quantitative situation assessment model for human operators, since human performance is mainly affected by situation assessment. In contrast to conventional HRA methods, which are mostly developed from expert opinions, the proposed situation assessment model for human operators is developed on the basis of mathematical theories, Bayesian inference and information theory, with the following two assumptions. 1. Human operators can perform Bayesian inference, even though the results cannot be as accurate as mathematical calculations. 2. In knowledge-driven monitoring, the probability that human operators select an indicator as the next indicator to monitor is proportional to the expected information from the indicator. (The expected information from each indicator can be calculated using information theory.) With an experiment, it is shown that the two assumptions are reasonable. The proposed mathematical model for the situation assessment of human operators is expected to be used as the basis for the development of a quantitative model for the situation assessment of actual human operators. By combining the RGGG method and the mathematical model for the situation assessment of human operators, I
Jasson, V.; Jacxsens, L.; Luning, P.A.; Rajkovic, A.; Uyttendaele, M.
This study provides an overview of, and criteria for, the selection of a method other than the reference method for microbial analysis of foods. In the first part, an overview of the general characteristics of available rapid methods, both for enumeration and detection, is given with reference to relevant
Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.
A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…
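The Hull method's balance of fit against number of parameters can be sketched numerically. This is a simplified heuristic version under stated assumptions: the fit values for 1..6 factors and the parameter counts are invented, and a production implementation would follow the full procedure of Lorenzo-Seva et al. rather than this condensed elbow search.

```python
import numpy as np

def hull_select(p, f):
    """Simplified Hull heuristic: keep candidate solutions on the upper convex
    hull of (number of parameters p, goodness-of-fit f), then pick the elbow
    maximizing st_i = ((f_i - f_{i-1})/(p_i - p_{i-1})) /
                      ((f_{i+1} - f_i)/(p_{i+1} - p_i))."""
    idx = list(range(len(p)))
    changed = True
    while changed:                      # discard points below their neighbours' chord
        changed = False
        for j in range(1, len(idx) - 1):
            a, b, c = idx[j - 1], idx[j], idx[j + 1]
            slope = (f[c] - f[a]) / (p[c] - p[a])
            if f[b] < f[a] + slope * (p[b] - p[a]):
                idx.pop(j); changed = True; break
    best, best_st = idx[1], -np.inf
    for j in range(1, len(idx) - 1):
        a, b, c = idx[j - 1], idx[j], idx[j + 1]
        st = ((f[b] - f[a]) / (p[b] - p[a])) / ((f[c] - f[b]) / (p[c] - p[b]))
        if st > best_st:
            best, best_st = idx[j], st
    return best

# Hypothetical fit values (CFI-like) for 1..6 factors, 5 extra params per factor
p = np.array([5, 10, 15, 20, 25, 30], dtype=float)
f = np.array([0.60, 0.82, 0.95, 0.96, 0.965, 0.968])
print(hull_select(p, f) + 1)  # 3: fit improves little beyond three factors
```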
Kowalski, B R [Univ. of Washington, Seattle; Bender, C F
A new method of preprocessing spectral data for the extraction of molecular structural information is described. The SELECT method generates orthogonal features that are important for classification purposes and that also retain their identity with the original measurements. A brief introduction to chemical pattern recognition is presented. A brief description of the method and an application to mass spectral data analysis follow. (BLM)
Park, Jeong Yoon; Kim, Kyung Hyun; Kuh, Sung Uk; Chin, Dong Kyu; Kim, Keun Su; Cho, Yong Eun
Surgeon spine angle during surgery was studied ergonomically and the kinematics of the surgeon's spine was related with musculoskeletal fatigue and pain. Spine angles varied depending on operation table height and visualization method, and in a previous paper we showed that the use of a loupe and a table height at the midpoint between the umbilicus and the sternum are optimal for reducing musculoskeletal loading. However, no studies have previously included a microscope as a possible visualization method. The objective of this study is to assess differences in surgeon spine angles depending on operating table height and visualization method, including microscope. We enrolled 18 experienced spine surgeons for this study, who each performed a discectomy using a spine surgery simulator. Three different methods were used to visualize the surgical field (naked eye, loupe, microscope) and three different operating table heights (anterior superior iliac spine, umbilicus, the midpoint between the umbilicus and the sternum) were studied. Whole spine angles were compared for three different views during the discectomy simulation: midline, ipsilateral, and contralateral. A 16-camera optoelectronic motion analysis system was used, and 16 markers were placed from the head to the pelvis. Lumbar lordosis, thoracic kyphosis, cervical lordosis, and occipital angle were compared between the different operating table heights and visualization methods as well as a natural standing position. Whole spine angles differed significantly depending on visualization method. All parameters were closer to natural standing values when discectomy was performed with a microscope, and there were no differences between the naked eye and the loupe. Whole spine angles were also found to differ from the natural standing position depending on operating table height, and became closer to natural standing position values as the operating table height increased, independent of the visualization method
Hessell, Steven M.; Morris, Robert L.; McGrogan, Sean W.; Heap, Anthony H.; Mendoza, Gil J.
A powertrain including an engine and torque machines is configured to transfer torque through a multi-mode transmission to an output member. A method for controlling the powertrain includes employing a closed-loop speed control system to control torque commands for the torque machines in response to a desired input speed. Upon approaching a power limit of a power storage device transferring power to the torque machines, power limited torque commands are determined for the torque machines in response to the power limit and the closed-loop speed control system is employed to determine an engine torque command in response to the desired input speed and the power limited torque commands for the torque machines.
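The idea of a closed-loop speed controller whose torque command is clamped by a power-storage limit can be sketched with a toy simulation. This is a hedged sketch only, not the patented control method: the PI gains, inertia, power limit, and the simple anti-windup rule are all invented for illustration.

```python
import numpy as np

J, dt = 0.5, 0.01               # rotor inertia [kg m^2], time step [s]
kp, ki = 8.0, 20.0              # PI gains (hypothetical)
P_max = 2000.0                  # storage-device power limit [W]
target = 100.0                  # desired input speed [rad/s]

omega, integ = 0.0, 0.0
for _ in range(2000):           # 20 s of simulated time
    err = target - omega
    torque = kp * err + ki * integ          # closed-loop torque command
    limit = P_max / max(omega, 1.0)         # power-limited torque bound
    clipped = np.clip(torque, -limit, limit)
    if clipped == torque:                   # simple anti-windup: integrate
        integ += err * dt                   # only when the command is unclamped
    omega += (clipped / J) * dt             # rigid-body dynamics
print(abs(omega - target) < 1.0)  # True: speed settles near the target
```

The clamp `P_max / omega` captures the key constraint in the abstract: as speed rises, the torque the storage device can support falls, and the speed loop must work within that bound.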
A new screening method for selection of desired recombinant plasmids in molecular cloning. ... African Journal of Biotechnology ... Regarding the findings of this study, after the digestion process, the products were directly subjected to ligation. Due to ...
Strano, Michael S. (Inventor); Usrey, Monica (Inventor); Barone, Paul (Inventor); Dyke, Christopher A. (Inventor); Tour, James M. (Inventor); Kittrell, W. Carter (Inventor); Hauge, Robert H (Inventor); Smalley, Richard E. (Inventor); Marek, legal representative, Irene Marie (Inventor)
The present invention is directed toward methods of selectively functionalizing carbon nanotubes of a specific type or range of types, based on their electronic properties, using diazonium chemistry. The present invention is also directed toward methods of separating carbon nanotubes into populations of specific types or range(s) of types via selective functionalization and electrophoresis, and also to the novel compositions generated by such separations.
This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can bring an "unprejudiced" conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
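The core AHP computation, priorities from the principal eigenvector of a pairwise comparison matrix plus a consistency check, can be sketched as follows. The comparison values for the three hypothetical suppliers are invented; only the eigenvector procedure itself is standard (Saaty's method).

```python
import numpy as np

def ahp_priorities(M):
    """Principal-eigenvector priorities and consistency ratio (CR) for a
    pairwise comparison matrix M, following Saaty's AHP."""
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)                     # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                 # normalized priorities
    n = M.shape[0]
    ci = (vals[k].real - n) / (n - 1)            # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty's random index
    return w, ci / ri

# Pairwise comparisons of three hypothetical suppliers: A is 3x preferable
# to B and 5x preferable to C; B is 2x preferable to C
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_priorities(M)
print(np.argmax(w))  # 0: supplier A ranks first
print(cr < 0.1)      # True: judgments are acceptably consistent
```

In a full AHP study each criterion gets its own comparison matrix and the criterion weights come from a higher-level matrix; the sketch shows a single level.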
Michiel C. Zijp
Sustainability assessments can play an important role in decision making. This role starts with selecting appropriate methods for a given situation. We observed that scientists, consultants, and decision-makers often do not systematically perform a problem analysis to guide the choice of method, partly owing to a lack of systematic, yet sufficiently versatile, approaches for doing so. Therefore, we developed and propose a new step towards method selection on the basis of question articulation: the Sustainability Assessment Identification Key. The identification key was designed to lead its user through all the important choices needed for comprehensive question articulation. Subsequently, methods that fit the resulting specific questions are suggested by the key. The key consists of five domains, of which three determine method selection and two the design or use of the method. Each domain consists of four or more criteria that need specification; for example, in the domain "system boundaries", amongst others, the spatial and temporal scales are specified. The key was tested (retrospectively) on a set of thirty case studies. Using the key appeared to contribute to improved: (i) transparency in the link between the question and method selection; (ii) consistency between questions asked and answers provided; and (iii) internal consistency in methodological design. There is latitude to develop the current initial key further, not only for selecting methods pertinent to a problem definition, but also as a principle for associated opportunities such as stakeholder identification.
Tokuyasu, Yoshiki; Kusakabe, Kiyoko; Yamazaki, Toshio
Electrocardiography (ECG), echocardiography, nuclear methods, cardiac catheterization, left ventriculography and endomyocardial biopsy (biopsy) were performed in 40 cases of cardiomyopathy (CM), 9 of endocardial fibroelastosis and 19 of specific heart muscle disease, and the usefulness and limitations of each method were comparatively estimated. In CM, various methods including biopsy were performed. The 40 patients were classified into 3 groups, i.e., hypertrophic (17), dilated (20) and non-hypertrophic, non-dilated (3), on the basis of left ventricular ejection fraction and hypertrophy of the ventricular wall. The hypertrophic group was divided into 4 subgroups: 9 septal, 4 apical, 2 posterior and 2 anterior. The nuclear study is useful in assessing the site of abnormal ventricular thickening, perfusion defects and ventricular function. Echocardiography is most useful in detecting asymmetric septal hypertrophy. Biopsy gives the sole diagnostic clue, especially in non-hypertrophic, non-dilated cardiomyopathy. ECG is useful in all cases, but correlation with the site of disproportional hypertrophy was not obtained. (J.P.N.)
Isotalo, A.E.; Wieselquist, W.A.
Highlights: • A method for handling external feed in depletion calculations with CRAM. • Source term can have polynomial or exponentially decaying time dependence. • CRAM with source term and adjoint capability implemented in ORIGEN in SCALE. • The new solver is faster and more accurate than the original solver of ORIGEN. - Abstract: A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which show the new solver to be orders of magnitude more accurate. Furthermore, in most cases, the new solver is up to several times faster, as it does not require the substepping needed by the original solver.
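The depletion-with-feed problem the abstract addresses is the linear system dn/dt = A n + b. For constant feed (the simplest polynomial case), it can be solved exactly by exponentiating an augmented matrix, a hedged sketch of the underlying mathematics: a production solver such as the one described would apply a rational approximation like CRAM to the sparse burnup matrix rather than a dense `expm`, and the matrices below are invented single-nuclide examples.

```python
import numpy as np
from scipy.linalg import expm

def deplete_with_feed(A, n0, b, t):
    """Solve dn/dt = A n + b (constant external feed b) by exponentiating
    the augmented matrix [[A, b], [0, 0]] acting on the state [n0, 1]."""
    m = len(n0)
    M = np.zeros((m + 1, m + 1))
    M[:m, :m] = A
    M[:m, m] = b
    state = np.append(n0, 1.0)
    return (expm(M * t) @ state)[:m]

# Single decaying nuclide with constant feed: check against the analytic
# solution n(t) = (feed/lam)(1 - exp(-lam t)) + n0 exp(-lam t)
lam, feed, n0, t = 0.3, 2.0, 5.0, 4.0
numeric = deplete_with_feed(np.array([[-lam]]), np.array([n0]),
                            np.array([feed]), t)[0]
analytic = feed / lam * (1 - np.exp(-lam * t)) + n0 * np.exp(-lam * t)
print(abs(numeric - analytic) < 1e-10)  # True
```

Higher-order polynomial feed extends the same trick with additional augmented columns forming a nilpotent block, which is one way such solvers accommodate time-dependent source terms.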
Free vibration equations for non-cylindrical (conical, barrel, and hyperboloidal) helical springs with noncircular cross-sections, which consist of 14 first-order ordinary differential equations with variable coefficients, are theoretically derived using spatially curved beam theory. In the formulation, the warping effect upon natural frequencies and vibrating mode shapes is studied for the first time, in addition to including the rotary inertia, shear, and axial deformation influences. The natural frequencies of the springs are determined by use of the improved Riccati transfer matrix method. The element transfer matrix used in the solution is calculated using the Scaling and Squaring method and Padé approximations. Three examples are presented for three types of springs with different cross-sectional shapes under the clamped-clamped boundary condition. The accuracy of the proposed method has been compared with FEM results using three-dimensional solid elements (Solid45 in the ANSYS code). Numerical results reveal that the warping effect is more pronounced for non-cylindrical helical springs than for cylindrical helical springs, and should be taken into consideration in the free vibration analysis of such springs.
Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija
Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue, within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to his health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that provides low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains with standard deviation minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
Supplier selection is always found to be a complex decision-making problem in a manufacturing environment. The presence of several independent and conflicting evaluation criteria, either qualitative or quantitative, makes the supplier selection problem a candidate to be solved by multi-criteria decision-making (MCDM) methods. Even though several MCDM methods have already been proposed for solving supplier selection problems, the need for an efficient method that can deal with qualitative judgments related to supplier selection still persists. In this paper, the applicability and usefulness of measuring attractiveness by a categorical-based evaluation technique (MACBETH) is demonstrated as a decision support tool for solving two real-time supplier selection problems having qualitative performance measures. The ability of the MACBETH method to quantify qualitative performance measures helps to provide a numerical judgment scale for ranking the alternative suppliers and selecting the best one. The results obtained from the MACBETH method exactly corroborate those derived by past researchers employing different mathematical approaches.
Zhu, J; Liang, J; Chen, S; Qin, A; Yan, D [Beaumont Health System, Royal Oak, MI (United States)]
Purpose: Organs change shape and size during radiation treatment due to both mechanical stress and radiation dose response. However, the dose-response-induced deformation has not been considered in conventional deformable image registration (DIR). A novel DIR approach is proposed to include both tissue elasticity and radiation-dose-induced organ deformation. Methods: Assuming that organ sub-volume shrinkage is proportional to the radiation-dose-induced cell killing/absorption, the dose-induced organ volume change was simulated by applying a virtual temperature to each sub-volume. Hence, both mechanical stress and the heterogeneous temperature field induce organ deformation. A thermal-stress finite element method with an organ-surface boundary condition was used to solve for the deformation. The initial boundary correspondence on the organ surface was created from conventional DIR, and the boundary condition was updated by an iterative optimization scheme to minimize the elastic deformation energy. The registration was validated on a numerical phantom. The treatment dose was constructed applying both conventional DIR and the proposed method using daily CBCT images obtained from a head-and-neck treatment. Results: The phantom study showed a 2.7% maximal discrepancy with respect to the actual displacement. Compared with conventional DIR, the sub-volume displacement difference in a right parotid had mean±SD (min∼max) of 1.1±0.9 (−0.4∼4.8), −0.1±0.9 (−2.9∼2.4) and −0.1±0.9 (−3.4∼1.9) mm in the RL/PA/SI directions, respectively. The mean parotid dose and V30 constructed including the dose-response-induced shrinkage were 6.3% and 12.0% higher than those from conventional DIR. Conclusion: A heterogeneous dose distribution in a normal organ causes non-uniform sub-volume shrinkage. A sub-volume in a high-dose region shrinks more than one in a low-dose region, causing more sub-volumes to move into the high-dose area during the treatment course. This leads to an unfavorable dose-volume relationship for the normal organ.
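The key modeling assumption (shrinkage proportional to local dose, emulated as a virtual temperature drop) can be illustrated with a toy calculation. This is not the authors' code; the shrinkage coefficient, sub-volumes and doses are invented, and the real method solves a thermal-stress finite element problem rather than scaling volumes directly.

```python
# Toy illustration of dose-induced sub-volume shrinkage via a
# virtual temperature field (hypothetical numbers throughout).
alpha = 0.002   # assumed fractional volume loss per Gy
sub_volumes = [1.0, 1.0, 1.0, 1.0]   # cm^3, initial sub-volume sizes
doses = [5.0, 15.0, 30.0, 60.0]      # Gy, heterogeneous dose distribution

# Virtual temperature change proportional to dose; with a unit thermal
# expansion coefficient this reproduces dV/V = -alpha * dose.
virtual_dT = [-alpha * d for d in doses]
deformed = [v * (1.0 + dT) for v, dT in zip(sub_volumes, virtual_dT)]
```

The high-dose sub-volume shrinks the most, which is exactly the non-uniform shrinkage the abstract argues pulls additional tissue into the high-dose region over the treatment course.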
Mercury is commonly found in coal, and coal utilization processes constitute one of the main sources of mercury emission to the environment. This issue is particularly important for Poland, because the Polish energy production sector is based on brown and hard coal. Forecasts show that this trend in energy production will continue in the coming years. When emission limits are introduced, methods of reducing mercury emission will have to be implemented in Poland. Mercury emission can be reduced by using coal with a relatively low mercury content. In the absence of such coals, methods of mercury removal from coal can be implemented. The currently used and developing methods include the coal cleaning process (both coal washing and dry deshaling) as well as the thermal pretreatment of coal (mild pyrolysis). The effectiveness of these methods varies for different coals, which is caused by the diversity of coal origin, the various characteristics of coal and, especially, the various modes of mercury occurrence in coal. It should be mentioned that the coal cleaning process allows for the removal of mercury occurring in mineral matter, mainly in pyrite. The thermal pretreatment of coal allows for the removal of mercury occurring in organic matter as well as in inorganic constituents characterized by a low temperature of mercury release. In this paper, guidelines for the selection of a mercury removal method from hard coal are presented. The guidelines were developed taking into consideration: the effectiveness of mercury removal from coal in the processes of coal cleaning and thermal pretreatment, the synergy effect resulting from the combination of these processes, the direction of coal utilization, as well as the influence of these processes on coal properties.
Cowgill, Joel [White Lake, MI]
An exhaust valve control method may include displacing an exhaust valve in communication with the combustion chamber of an engine to an open position using a hydraulic exhaust valve actuation system and returning the exhaust valve to a closed position using the hydraulic exhaust valve actuation assembly. During closing, the exhaust valve may be displaced for a first duration from the open position to an intermediate closing position at a first velocity by operating the hydraulic exhaust valve actuation assembly in a first mode. The exhaust valve may be displaced for a second duration greater than the first duration from the intermediate closing position to a fully closed position at a second velocity at least eighty percent less than the first velocity by operating the hydraulic exhaust valve actuation assembly in a second mode.
Blizzard, John; Tonge, James Steven; Weidner, William Kenneth
A flexible barrier film has a thickness of from greater than zero to less than 5,000 nanometers and a water vapor transmission rate of no more than 1×10⁻² g/m²/day at 22 °C and 47% relative humidity. The flexible barrier film is formed from a composition, which comprises a multi-functional acrylate. The composition further comprises the reaction product of an alkoxy-functional organometallic compound and an alkoxy-functional organosilicon compound. A method of forming the flexible barrier film includes the steps of disposing the composition on a substrate and curing the composition to form the flexible barrier film. The flexible barrier film may be utilized in organic electronic devices.
Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky
Goal, Scope and Background. The aim of this study has been to come up with recommendations on how to develop a selection method (SM) within the method development research of the OMNIITOX project. An SM is a method for prioritization of chemical emissions to be included in a Life Cycle Impact … categories, and when they do there are typically many gaps. This study covers the only existing methods explicitly designed as SMs (EDIP-selection, Priofactor and CPM-selection), the dominating Chemical Ranking and Scoring (CRS) method in Europe (EURAM) and in the USA (WMPT) that can be adapted for this purpose … selection methods. Conclusion and Recommendations. For the development of SMs it is recommended that the general principles for CRS systems as applied to SMs are taken into account. Furthermore, special attention should be paid to some specific issues, i.e. the emitted amount should be included, data …
Taenaka, Yuzo; Kashihara, Shigeru; Tsukamoto, Kazuya; Yamaguchi, Suguru; Oie, Yuji
In the near future, wireless local area networks (WLANs) will overlap to provide continuous coverage over a wide area. In such ubiquitous WLANs, a mobile node (MN) moving freely between multiple access points (APs) requires not only permanent access to the Internet but also continuous communication quality during handover. In order to satisfy these requirements, an MN needs to (1) select an AP with better performance and (2) execute a handover seamlessly. To satisfy requirement (2), we proposed a seamless handover method in a previous study. Moreover, in order to achieve (1), the Received Signal Strength Indicator (RSSI) is usually employed to measure wireless link quality in a WLAN system. However, in a real environment, especially if APs are densely situated, it is difficult to always select an AP with better performance based on only the RSSI. This is because the RSSI alone cannot detect the degradation of communication quality due to radio interference. Moreover, it is important that AP selection is completed only on an MN, because we can assume that, in ubiquitous WLANs, various organizations or operators will manage APs. Hence, we cannot modify the APs for AP selection. To overcome these difficulties, in the present paper, we propose and implement a proactive AP selection method considering wireless link condition based on the number of frame retransmissions in addition to the RSSI. In the evaluation, we show that the proposed AP selection method can appropriately select an AP with good wireless link quality, i.e., high RSSI and low radio interference.
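The paper's AP-selection criterion, combining RSSI with the observed frame-retransmission ratio so the MN can detect interference on its own, can be sketched as a simple scoring rule. The AP names, thresholds and the linear scoring formula below are our invention for illustration; the paper's actual decision logic may differ.

```python
# Hypothetical MN-side scoring of candidate APs: combine RSSI with the
# frame-retransmission ratio (a proxy for radio interference).
aps = {
    # name: (rssi_dbm, retransmission_ratio)
    "AP1": (-45, 0.40),   # strong signal but heavy interference
    "AP2": (-55, 0.05),   # slightly weaker signal, clean channel
    "AP3": (-80, 0.02),   # too weak to be usable
}

def score(rssi_dbm, retx_ratio, rssi_floor=-75, retx_limit=0.30):
    """Return a link-quality score; disqualify APs below the RSSI floor
    or above the retransmission limit (illustrative thresholds)."""
    if rssi_dbm < rssi_floor or retx_ratio > retx_limit:
        return float("-inf")
    # Higher RSSI and fewer retransmissions are both better.
    return rssi_dbm - 100.0 * retx_ratio

best = max(aps, key=lambda name: score(*aps[name]))
```

Note how AP1, which an RSSI-only rule would pick, is rejected because its retransmission ratio reveals degraded communication quality, exactly the failure mode the paper attributes to RSSI-based selection in dense deployments.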
The focus of this study was to explore and describe influences on decision making related to infant feeding methods in the context of HIV and AIDS. Study objectives were: (1) to explore and describe the influences on decision making related to infant feeding methods selected by the mother during the antenatal period and ...
Mariani, Robert Dominick
Zirconium-based metal alloy compositions comprise zirconium, a first additive in which the permeability of hydrogen decreases with increasing temperatures at least over a temperature range extending from 350 °C to 750 °C, and a second additive having a solubility in zirconium over the temperature range extending from 350 °C to 750 °C. At least one of a solubility of the first additive in the second additive over the temperature range extending from 350 °C to 750 °C and a solubility of the second additive in the first additive over the temperature range extending from 350 °C to 750 °C is higher than the solubility of the second additive in zirconium over the temperature range extending from 350 °C to 750 °C. Nuclear fuel rods include a cladding material comprising such metal alloy compositions, and nuclear reactors include such fuel rods. Methods are used to fabricate such zirconium-based metal alloy compositions.
Chaharsooghi, S K; Ashrafi, Mehdi
Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in the literature. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended model of the TBL approach in the supply chain by presenting a fuzzy multi-criteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution to the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis.
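The crisp TOPSIS core that Neofuzzy TOPSIS builds on (the fuzzy variant replaces the crisp numbers with fuzzy-number arithmetic) is easy to sketch: vector-normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. The suppliers, criteria values and weights below are invented.

```python
import math

# Crisp TOPSIS sketch; data and weights are hypothetical.
suppliers = ["S1", "S2", "S3"]
# Criteria columns: cost (lower is better), quality, social score.
matrix = [[3.0, 8.0, 6.0],
          [2.0, 7.0, 8.0],
          [4.0, 9.0, 5.0]]
weights = [0.4, 0.35, 0.25]
benefit = [False, True, True]   # cost is a "cost"-type criterion

# Vector-normalize each column, then weight it.
ncols = len(weights)
norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]

# Ideal and anti-ideal points, respecting criterion direction.
ideal = [max(r[j] for r in v) if benefit[j] else min(r[j] for r in v)
         for j in range(ncols)]
worst = [min(r[j] for r in v) if benefit[j] else max(r[j] for r in v)
         for j in range(ncols)]

def closeness(row):
    # Relative closeness to the ideal solution (1 = ideal, 0 = anti-ideal).
    d_pos = math.dist(row, ideal)
    d_neg = math.dist(row, worst)
    return d_neg / (d_pos + d_neg)

ranking = sorted(suppliers, key=lambda s: closeness(v[suppliers.index(s)]),
                 reverse=True)
```

With these invented numbers, the cheap supplier with the best social score ranks first; the point of the sketch is the mechanics (normalization, ideal/anti-ideal distances, closeness), not the data.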
Li Wang; Xilin Hou; Lili Zhang
The existing personnel selection decisions in practice are based on the evaluation of a job seeker's human capital, which may make it difficult to achieve a person-job match that satisfies both parties. Therefore, this paper puts forward a new personnel selection method that takes bilateral matching into consideration. Starting from the employment notion of "satisfaction", a satisfaction evaluation indicator system for each party is constructed. The multi-objective optimization model is given according to ...
García-Fernández, Pablo; Wojdeł, Jacek C.; Íñiguez, Jorge; Junquera, Javier
We present a first-principles-based (second-principles) scheme that permits large-scale materials simulations including both atomic and electronic degrees of freedom on the same footing. The method is based on a predictive quantum-mechanical theory—e.g., density functional theory—and its accuracy can be systematically improved at a very modest computational cost. Our approach is based on dividing the electron density of the system into a reference part—typically corresponding to the system's neutral, geometry-dependent ground state—and a deformation part—defined as the difference between the actual and reference densities. We then take advantage of the fact that the bulk part of the system's energy depends on the reference density alone; this part can be efficiently and accurately described by a force field, thus avoiding explicit consideration of the electrons. Then, the effects associated to the difference density can be treated perturbatively with good precision by working in a suitably chosen Wannier function basis. Further, the electronic model can be restricted to the bands of interest. All these features combined yield a very flexible and computationally very efficient scheme. Here we present the basic formulation of this approach, as well as a practical strategy to compute model parameters for realistic materials. We illustrate the accuracy and scope of the proposed method with two case studies, namely, the relative stability of various spin arrangements in NiO (featuring complex magnetic interactions in a strongly-correlated oxide) and the formation of a two-dimensional electron gas at the interface between band insulators LaAlO3 and SrTiO3 (featuring subtle electron-lattice couplings and screening effects). We conclude by discussing ways to overcome the limitations of the present approach (most notably, the assumption of a fixed bonding topology), as well as its many envisioned possibilities and future extensions.
The aim of the present study is to apply simple ODE models to modeling the spread of emerging infectious diseases and to show the importance of model selection in estimating parameters, the basic reproduction number, turning point, and final size. To quantify the plausibility of each model, given the data and a set of four models (Logistic, Gompertz, Rosenzweig, and Richards), Bayes factors are calculated, and precise estimates of the best-fitted model parameters and key epidemic characteristics have been obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI: 1.3506, 1.3537), 1.2101 (95% CI: 1.2084, 1.2119), 3.0234 (95% CI: 2.6063, 3.4881), and 1.9018 (95% CI: 1.8565, 1.9478); the turning points are November 7, November 17, October 2, and November 3, 2014; and the final sizes until December 2015 are 25794 (95% CI: 25630, 25958), 3916 (95% CI: 3865, 3967), 9886 (95% CI: 9740, 10031), and 12633 (95% CI: 12515, 12750) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing emerging infectious diseases, and that arbitrarily picking a model without any consideration of alternatives is problematic.
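The simplest of the four candidate models, the logistic model, already exhibits the three quantities the study estimates: growth rate, turning point, and final size. The sketch below integrates dC/dt = rC(1 − C/K) with forward Euler using invented parameter values; the study itself fits such models to data with Bayesian machinery rather than simulating with known parameters.

```python
# Logistic epidemic-curve sketch with hypothetical parameters.
r, K = 0.2, 10000.0     # growth rate per day, final size (carrying capacity)
c = 10.0                # initial cumulative case count
dt, days = 0.1, 200

traj = [c]
for _ in range(int(days / dt)):
    c += dt * r * c * (1.0 - c / K)   # dC/dt = r C (1 - C/K)
    traj.append(c)

final_size = traj[-1]

# Turning point: where the daily increments peak; for the logistic model
# this is where C is near K/2, at t* = ln((K - C0)/C0) / r (about day 34.5 here).
increments = [b - a for a, b in zip(traj, traj[1:])]
turning_day = increments.index(max(increments)) * dt
```

Because final size and turning point depend directly on the model form, two models fitted to the same early-epidemic data can disagree sharply on both, which is the paper's argument for formal model selection via Bayes factors.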
A project manager has a major influence on the success or failure of a project. A good project manager can match the strategy and objectives of the organization with the goals of the project. Therefore, the selection of an appropriate project manager is a key factor for the success of the project. A potential project manager is judged by his or her proven performance and personal qualifications. This paper proposes a method to calculate the weighted scores and the full ranking of candidates for managing a project, and to select the best of those candidates. The proposed method combines specific methodologies, Data Envelopment Analysis (DEA) and the Analytic Hierarchy Process (AHP), and uses DEA ranking methods to enhance selection.
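The AHP half of such a pipeline turns pairwise comparisons of criteria into weights. A common lightweight stand-in for the principal-eigenvector computation is the row geometric-mean approximation, sketched below with an invented 3×3 comparison matrix (the criteria names are ours, and the paper may extract weights differently).

```python
import math

# AHP weight extraction via the row geometric-mean approximation.
criteria = ["experience", "leadership", "education"]
# pairwise[i][j] = how strongly criterion i is preferred over criterion j
pairwise = [[1.0, 3.0, 5.0],
            [1/3, 1.0, 3.0],
            [1/5, 1/3, 1.0]]

# Geometric mean of each row, then normalize to sum to 1.
gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
total = sum(gm)
weights = dict(zip(criteria, (g / total for g in gm)))
```

The resulting weights (experience heaviest, education lightest, for this matrix) would then feed the weighted-score ranking of candidates.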
Hanks, Brantley R.; Skelton, Robert E.
This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.
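The LQR/Riccati machinery the paper restricts to passive design spaces has a closed form in the scalar case that makes the structure visible. For dx/dt = ax + bu with cost ∫(qx² + ru²) dt, the scalar algebraic Riccati equation 2ap − (b²/r)p² + q = 0 gives the positive solution below; the plant numbers are invented and this is only a sanity-check sketch, not the paper's structural formulation.

```python
import math

# Scalar LQR: solve the algebraic Riccati equation in closed form.
a, b, q, r = 1.0, 1.0, 1.0, 1.0   # unstable open-loop plant (a > 0)

# Positive root of 2 a p - (b^2 / r) p^2 + q = 0.
p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
gain = b * p / r                  # optimal state feedback u = -gain * x
closed_loop = a - b * gain        # closed-loop pole; must be negative
```

For these numbers p = 1 + √2 and the closed-loop pole is −√2, i.e. the optimal feedback stabilizes the unstable plant, which is the baseline any passivity-constrained restriction of the design space must preserve.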
Ross, J. C.
The ability of a lower-order panel method, VSAERO, to accurately predict the lift and pitching moment of a complete forward-swept-wing/canard configuration was investigated. The program can simulate nonlinear effects including boundary-layer displacement thickness, wake roll-up, and, to a limited extent, separated wakes. The predictions were compared with experimental data obtained using a small-scale model in the 7- by 10-Foot Wind Tunnel at NASA Ames Research Center. For the particular configuration under investigation, wake roll-up had only a small effect on the force and moment predictions. The effect of the displacement-thickness modeling was to reduce the lift-curve slope slightly, thus bringing the predicted lift into good agreement with the measured value. Pitching-moment predictions were also improved by the boundary-layer simulation. The separation modeling was found to be sensitive to user inputs, but appears to give a reasonable representation of a separated wake. In general, the nonlinear capabilities of the code were found to improve the agreement with experimental data. The usefulness of the code would be enhanced by improving the reliability of the separated-wake modeling and by the addition of a leading-edge separation model.
Zhang, Qieqie; Zhao, Long
The performance of terrain-aided navigation is closely related to the selection of the terrain matching area, and different matching algorithms have different adaptability to terrain. This paper mainly studies the adaptability to terrain of the TERCOM algorithm, analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. After that, we propose a selection method for the terrain matching area for the TERCOM algorithm, and verify the method's correctness with real terrain data in a simulation experiment. Experimental results show that the matching area obtained by the proposed method gives good navigation performance and that the matching probability of the TERCOM algorithm is greater than 90%.
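The TERCOM matching step itself is a correlation search: slide the measured elevation profile along the stored map and pick the offset with the smallest mean absolute difference (MAD). The elevations below are invented; the point of the sketch is why terrain character matters, since over flat terrain many offsets would tie and matching probability would collapse.

```python
# Toy TERCOM-style profile matching by mean absolute difference.
stored = [105, 110, 120, 135, 150, 140, 125, 115, 110, 108]  # map elevations (m)
measured = [136, 149, 141, 124]   # noisy radar-altimeter profile (m)

def mad(offset):
    """Mean absolute difference between the measured profile and the
    stored map strip starting at the given offset."""
    window = stored[offset:offset + len(measured)]
    return sum(abs(m, ) if False else abs(m - s)
               for m, s in zip(measured, window)) / len(measured)

offsets = range(len(stored) - len(measured) + 1)
best_offset = min(offsets, key=mad)
```

The rough terrain around the ridge at index 3-6 makes the minimum sharp; over featureless terrain the MAD curve would be nearly flat and the estimated position unreliable, which is exactly why matching-area selection is tied to terrain characteristic parameters.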
Chu, Henry Shiu-Hung [Idaho Falls, ID]; Lillo, Thomas Martin [Idaho Falls, ID]
The invention includes methods of forming an aluminum oxynitride-comprising body. For example, a mixture is formed which comprises A:B:C in a respective molar ratio in the range of 9:3.6-6.2:0.1-1.1, where "A" is Al₂O₃, "B" is AlN, and "C" is a total of one or more of B₂O₃, SiO₂, Si-Al-O-N, and TiO₂. The mixture is sintered at a temperature of at least 1,600 °C at a pressure of no greater than 500 psia effective to form an aluminum oxynitride-comprising body which is at least internally transparent and has at least 99% maximum theoretical density.
Schizophrenia (SZ) is one of the most cryptic and costly mental disorders in terms of human suffering and societal expenditure (van Os and Kapur, 2009). Though strong evidence exists for functional, structural and genetic abnormalities associated with this disease, there is as yet no replicable finding which has proven accurate enough to be useful in clinical decision making (Fornito et al., 2009), and its diagnosis relies primarily upon symptom assessment (Williams et al., 2010a). It is likely that the lack of consistent neuroimaging findings is in part because most models favor only one data type or do not combine data from different imaging modalities effectively, thus missing potentially important differences which are only partially detected by each modality (Calhoun et al., 2006a). It is becoming increasingly clear that multi-modal fusion, a technique which takes advantage of the fact that each modality provides a limited view of the brain/gene and may uncover hidden relationships, is an important tool to help unravel the black box of schizophrenia. In this review paper, we survey a number of multimodal fusion applications which enable us to study the schizophrenia macro-connectome, including brain functional, structural and genetic aspects, and which may help us understand the disorder in a more comprehensive and integrated manner. We also provide a table that characterizes these applications by the methods used and compare these methods in detail, especially for multivariate models, which may serve as a valuable reference to help readers select an appropriate method for a given study.
Lievens, Filip; Sackett, Paul R
Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures.
Material selection is a complex problem in the design and development of products for diverse engineering applications. This paper presents a fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) method based on trapezoidal fuzzy interval numbers that can be applied to the selection of materials for an automotive instrument panel. It also makes a significant contribution to the literature in terms of applying a fuzzy decision-making approach to material selection problems. The method is illustrated, validated, and compared against three different fuzzy MCDM methods (fuzzy VIKOR, fuzzy TOPSIS, and fuzzy ELECTRE) in terms of its ranking performance. The relationships between the compared methods and the proposed scenarios for fuzzy PROMETHEE are evaluated via Spearman's correlation coefficient. Styrene Maleic Anhydride and Polypropylene are identified as suitable materials for the automotive instrument panel case. We propose a generic fuzzy MCDM methodology that can be practically implemented for the material selection problem. The main advantages of the methodology are its consideration of vagueness, uncertainty, and fuzziness in the decision-making environment.
Ali Reza Afshari
Personnel evaluation and selection is a very important activity for enterprises. Different jobs require different abilities, and the criteria used to measure those abilities differ accordingly. A suitable and flexible method is therefore needed to evaluate the performance of each candidate against the requirements of each job in relation to each criterion. The Analytic Hierarchy Process (AHP) is a multi-criteria decision-making method derived from paired comparisons. Simple Additive Weighting (SAW) is the most frequently used multi-attribute decision technique; it is based on the weighted average. The combination successfully models the ambiguity and imprecision associated with the pairwise comparison process and reduces personal bias. This study analyzes the Analytic Hierarchy Process in order to make the recruitment process more reasonable, based on a fuzzy multiple-criteria decision-making model, to achieve the goal of personnel selection. Finally, an example is implemented to demonstrate the practicability of the proposed method.
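The SAW step of such an AHP+SAW pipeline is just a normalized weighted average: scale each criterion column by its best value, then rank candidates by the weighted sum. The candidates, scores and weights below are invented; in the paper's setting the weights would come from the AHP pairwise comparisons.

```python
# Simple Additive Weighting (SAW) sketch with hypothetical data.
candidates = ["A", "B", "C"]
scores = [[70, 80, 90],    # criterion columns: skills, experience, interview
          [85, 75, 80],
          [60, 95, 70]]
weights = [0.5, 0.3, 0.2]  # e.g. produced by an AHP pairwise comparison

# Normalize each (benefit-type) column by its maximum, then take the
# weighted average per candidate.
ncols = len(weights)
col_max = [max(row[j] for row in scores) for j in range(ncols)]
saw = {c: sum(weights[j] * row[j] / col_max[j] for j in range(ncols))
       for c, row in zip(candidates, scores)}
best = max(saw, key=saw.get)
```

With these numbers candidate B wins on the heavily weighted skills criterion even though C has the best experience score, illustrating how the AHP-derived weights drive the outcome.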
We presented a comparison between several feature ranking methods used on two real datasets. We considered six ranking methods that can be divided into two broad categories: statistical and entropy-based. Four supervised learning algorithms were adopted to build models, namely IB1, Naive Bayes, the C4.5 decision tree, and the RBF network. We showed that the selection of ranking method can be important for classification accuracy. In our experiments, ranking methods paired with different supervised learning algorithms gave quite different results for balanced accuracy. Our cases confirm that, in order to be sure that the subset of features giving the highest accuracy has been selected, the use of many different indices is recommended.
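A representative entropy-based ranking index is information gain: score each feature by how much knowing its value reduces the entropy of the class label. The sketch below implements it for binary features on invented data; the statistical indices in the study (and the datasets) are of course different.

```python
import math
from collections import Counter

# Entropy-based feature ranking by information gain (toy data).
def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """H(labels) minus the weighted entropy of the label subsets
    induced by each feature value."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

labels   = [1, 1, 1, 0, 0, 0]
feature1 = [1, 1, 1, 0, 0, 0]   # perfectly predictive of the label
feature2 = [1, 0, 1, 0, 1, 0]   # uninformative

ranked = sorted([("f1", feature1), ("f2", feature2)],
                key=lambda kv: information_gain(kv[1], labels),
                reverse=True)
```

Here the perfectly predictive feature gets gain 1.0 bit and tops the ranking; a statistical index such as a chi-squared score would rank these two features the same way, but on real data different indices can and do disagree, which is the study's point.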
Gani, Rafiqul; Jiménez-González, Concepción; Constable, David J.C.
A method to select appropriate green solvents for the promotion of a class of organic reactions has been developed. The method combines knowledge from industrial practice and physical insights with computer-aided property estimation tools for selection/design of solvents. In particular, it employs … estimates of thermodynamic properties to generate a knowledge base of reaction, solvent and environment related properties that directly or indirectly influence the rate and/or conversion of a given reaction. Solvents are selected using a rules-based procedure where the estimated reaction-solvent properties … is to produce, for a given reaction, a short list of chemicals that could be considered as potential solvents, to evaluate their performance in the reacting system, and, based on this, to rank them according to a scoring system. Several examples of application are given to illustrate the main features and steps …
Souvenir, Richard; Buhler, Jeremy; Stormo, Gary; Zhang, Weixiong
Single-nucleotide polymorphism (SNP) genotyping is an important molecular genetics process, which can produce results that will be useful in the medical field. Because of inherent complexities in DNA manipulation and analysis, many different methods have been proposed for a standard assay. One of the proposed techniques for performing SNP genotyping requires amplifying regions of DNA surrounding a large number of SNP loci. To automate a portion of this particular method, it is necessary to select a set of primers for the experiment. Selecting these primers can be formulated as the Multiple Degenerate Primer Design (MDPD) problem. The Multiple, Iterative Primer Selector (MIPS) is an iterative beam-search algorithm for MDPD. Theoretical and experimental analyses show that this algorithm performs well compared with the limits of degenerate primer design. Furthermore, MIPS outperforms an existing algorithm that was designed for a related degenerate primer selection problem.
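A central bookkeeping quantity in degenerate primer design is the degeneracy of a primer: the number of plain DNA sequences it matches, i.e. the product over positions of the number of bases each IUPAC code allows. MDPD solvers such as MIPS must keep this product under a user-specified limit while covering as many targets as possible. The primer string below is invented.

```python
import math

# Degeneracy of a primer over the IUPAC nucleotide alphabet:
# the number of concrete sequences the degenerate primer matches.
IUPAC = {"A": 1, "C": 1, "G": 1, "T": 1,
         "R": 2, "Y": 2, "S": 2, "W": 2, "K": 2, "M": 2,
         "B": 3, "D": 3, "H": 3, "V": 3, "N": 4}

def degeneracy(primer):
    # Product of per-position choices.
    return math.prod(IUPAC[base] for base in primer)

primer = "ACRYTN"      # R = A/G, Y = C/T, N = any base
d = degeneracy(primer)  # 1*1*2*2*1*4 = 16 matched sequences
```

Each degenerate position multiplies the match count, so degeneracy grows exponentially with the number of ambiguous codes; this is why beam search over candidate primers, as in MIPS, prunes extensions whose degeneracy exceeds the budget.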
Sistiaga, Manu; Herrmann, Bent; Larsen, R.B.
… was repeated throughout the eight cases in this investigation. When using the paired-gear method, the distribution of the estimated L50 and SR is wider; the distribution of the estimated split parameter has a higher variability than the true split; the estimated mean L50 and SR can be biased; the estimated … We recommend that the methodology used to obtain selectivity estimates using the paired-gear method be reviewed.
Andersen, Lars; Andersen, Søren; Damkilde, Lars
The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.
Büchler, Ralph; Andonov, Sreten; Bienefeld, Kaspar
Here we cover a wide range of methods currently in use and recommended in modern queen rearing, selection and breeding. The recommendations are meant to equally serve as standards for both scientific and practical beekeeping purposes. The basic conditions and different management techniques for q...
Michalcová, A.; Vojtěch, D.; Kubatík, Tomáš František; Stehlíková, K.; Brabec, F.; Marek, I.
Vol. 128, No. 4 (2015), pp. 640-642. ISSN 0587-4246. [International Symposium on Physics of Materials (ISPMA) /13./, Prague, 31.08.2014-04.09.2014] Institutional support: RVO:61389021. Keywords: iron nanoparticles; selective leaching method. Subject RIV: JK - Corrosion; Surface Treatment of Materials. Impact factor: 0.525, year: 2015
Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.
The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of
Harkness, Shelly Sheats; Brass, Amy
Mathematics methods textbooks/texts are important components of many courses for preservice teachers. Researchers should explore how these texts are selected and used. Within this paper we report the findings of a survey administered electronically to 132 members of the Association of Mathematics Teacher Educators (AMTE) in order to answer the…
Balusa, Bhanu Chander; Singam, Jayanthu
The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved by using two multi-attribute decision-making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). The analytic hierarchy process is used to calculate the weights of the attributes (i.e., the parameters used in this paper). Mining method selection depends on physical, mechanical, economic and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and the mining methods, and they give higher accuracy and faster computation when compared with other decision-making techniques. The proposed techniques are applied to determine the effective mining method for a bauxite mine, and the results are compared with the methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
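The WPM half of the comparison is compact enough to sketch: each alternative's score is the product of its normalized criterion values raised to the attribute weights. The mining methods, criterion values and weights below are invented for illustration (though the toy outcome happens to match the abstract's cut-and-fill conclusion).

```python
# Weighted Product Method (WPM) sketch with hypothetical data.
methods = ["cut-and-fill", "sublevel stoping", "block caving"]
# Criterion columns: ore recovery, flexibility, cost rating
# (all treated as benefit-type: higher is better).
scores = [[9.0, 8.0, 6.0],
          [7.0, 6.0, 8.0],
          [6.0, 4.0, 9.0]]
weights = [0.5, 0.3, 0.2]   # e.g. from an AHP weighting step

def wpm_score(row):
    """Product of max-normalized criterion values raised to their weights."""
    total = 1.0
    for j, x in enumerate(row):
        best_j = max(r[j] for r in scores)
        total *= (x / best_j) ** weights[j]
    return total

best = max(methods, key=lambda m: wpm_score(scores[methods.index(m)]))
```

Unlike additive aggregation, the multiplicative form penalizes an alternative sharply for any very poor criterion value, which is one reason WPM and PROMETHEE can rank the same alternatives differently.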
Cutanda Henriquez, Vicente; Juhl, Peter Møller
The implementation of viscous and thermal losses using the Boundary Element Method (BEM) is based on the Kirchhoff’s dispersion relation and has been tested in previous work using analytical test cases and comparison with measurements. Numerical methods that can simulate sound fields in fluids...
Ignatiev, E.A.; Romanyukha, A.A.; Koshta, A.A.; Wieser, A.
The method of selective saturation is based on the difference in the microwave (mw) power dependence of the background and radiation-induced EPR components of the tooth enamel spectrum. Subtracting the EPR spectrum recorded at low mw power from that recorded at higher mw power provides a considerable reduction of the background component in the spectrum. The resolution of the EPR spectrum could be improved 10-fold; however, the signal-to-noise ratio was simultaneously halved. A detailed comparative study of reference samples with known absorbed doses was performed to demonstrate the advantage of the method. The application of the selective saturation method to EPR dosimetry with tooth enamel reduced the lower limit of EPR dosimetry to about 100 mGy. (author)
The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for thermal images of the skin of a human foot and of a face. The full source code of the developed application is provided as an attachment. (Figure: the main window of the program during dynamic analysis of the foot thermal image.) © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Supplier selection is one of the most important decisions made in supply chain management, and the supplier evaluation problem has been at the center of supply chain researchers' attention in recent years. Managers regard some of these studies and methods as inappropriate because simple weighted-scoring methods, generally based on the subjective opinions and judgments of the decision-maker units involved in the supplier evaluation process, yield imprecise and even unreliable results. This paper proposes a methodology that integrates data envelopment analysis (DEA) and the group analytical hierarchy process (GAHP) for evaluating and selecting the most efficient supplier. The methodology consists of six steps, which are introduced one by one; finally, the applicability of the proposed method is demonstrated by assessing 12 suppliers in a numerical example.
Stasiulevicius, R.; Kastner, G.F.
Thermal neutrons are important analytical tools for microscopic probing of materials. These neutrons can be selected by a diffraction technique using a monocrystal, usually artificial. A crystal selection process was implemented, and the characteristics of natural specimens were studied by the k 0 method of activation analysis. A representative set of 120 samples, of which 21 were the best types, was irradiated in the IPR-R1 reactor and measured with a neutron diffractometer at the IEA-R1m reactor, both in Brazil. These results are useful for building up a database and ease the choice of an appropriate natural crystal, with some advantageous options: highest diffracted intensity, an enlarged operational energy interval, and optimal performance in special applications. (author)
Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya
Multi-scale models of the cardiovascular system provide insight that is unavailable from in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective for analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate the different phenomena. However, coupling methods require a significant amount of computation, since a system of non-linear equations must be solved at each timestep. We therefore propose a coupling method that decreases the amount of computation by using a Kalman filter. In our method, the Kalman filter calculates an approximation of the solution to the system of non-linear equations at each timestep, which is then used as the initial value for solving the system. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method, and requires 49.4% fewer iterations than a smoothing spline predictor.
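The idea of using a Kalman filter only to predict the Newton initial guess can be sketched on a toy scalar "coupled" problem; everything below (the model, the constant-velocity filter, and the tuning) is an illustrative assumption, not the authors' simulator:

```python
import numpy as np

def newton(f, df, x0, tol=1e-12, max_it=50):
    """Newton iterations; returns (root, iteration count)."""
    x = x0
    for i in range(1, max_it + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i
    return x, max_it

# Toy "coupled" problem: at each timestep solve x**3 + x - u(t) = 0,
# where u(t) stands in for the state of the other coupled model.
u = lambda t: 2.0 + np.sin(0.3 * t)

# Constant-velocity Kalman filter used only to predict a good initial guess.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition for (x, dx)
H = np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)                     # process noise (assumed tuning)
R = np.array([[1e-10]])                  # the solver output is near-exact
s = np.array([1.0, 0.0])                 # initial state estimate
P = np.eye(2)

iters_prev, iters_kf = 0, 0
x_prev = 1.0
for t in range(1, 201):
    f  = lambda x: x**3 + x - u(t)
    df = lambda x: 3.0 * x**2 + 1.0

    # Strategy A: warm-start from the previous timestep's solution.
    x_a, n_a = newton(f, df, x_prev)
    iters_prev += n_a

    # Strategy B: warm-start from the Kalman prediction.
    s_pred = F @ s
    P_pred = F @ P @ F.T + Q
    x_b, n_b = newton(f, df, s_pred[0])
    iters_kf += n_b

    # Measurement update with the converged solution.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    s = s_pred + (K @ (np.array([x_b]) - H @ s_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P_pred

    x_prev = x_a

print(iters_prev, iters_kf)
```

On this toy problem the Kalman prediction lands closer to the root than the previous solution does, so the total Newton iteration count drops; the paper's reported 94% saving applies to its full cardiovascular system, not to this sketch.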
Stoia, Lucas John; Melton, Patrick Benedict; Johnson, Thomas Edward; Stevenson, Christian Xavier; Vanselow, John Drake; Westmoreland, James Harold
A turbomachine combustor nozzle includes a monolithic nozzle component having a plate element and a plurality of nozzle elements. Each of the plurality of nozzle elements includes a first end extending from the plate element to a second end. The plate element and plurality of nozzle elements are formed as a unitary component. A plate member is joined with the nozzle component. The plate member includes an outer edge that defines first and second surfaces and a plurality of openings extending between the first and second surfaces. The plurality of openings are configured and disposed to register with and receive the second end of corresponding ones of the plurality of nozzle elements.
Lewandowski, E.F.; Peterson, L.L.
This invention teaches a method of cutting a narrow slot in an extrusion die with an electrical discharge machine by first drilling spaced holes at the ends of where the slot will be, whereby oil can flow through the holes and the slot to flush away the material eroded as the slot is being cut. The invention further teaches a method of extruding a very thin ribbon of a solid, highly reactive material such as lithium or sodium through the die in an inert atmosphere of nitrogen, argon or the like, as in a glovebox. The invention further teaches a method of stamping sample discs from the ribbon and of packaging each disc by sandwiching it between two aluminum sheets and cold welding the sheets together along an annular seam beyond the outer periphery of the disc. This provides a sample of high-purity reactive material that can have a long shelf life.
Microarray technology has enriched the study of gene expression in such a way that scientists are now able to measure the expression levels of thousands of genes in a single experiment. Microarray gene expression data have gained great importance in recent years due to their role in disease diagnosis and prognosis, which helps in choosing the appropriate treatment plan for patients. Although this technology has ushered in a new era of molecular classification, interpreting gene expression data remains a difficult problem and an active research area due to its native "high dimensional, low sample size" nature. Such problems pose great challenges to existing classification methods, so effective feature selection techniques are often needed to correctly classify different tumor types, leading to a better understanding of genetic signatures as well as improved treatment strategies. This thesis presents a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three classification methods (support vector machines, k-nearest neighbor, and random forest) and eight feature selection methods (information gain, twoing rule, sum minority, max minority, Gini index, sum of variances, t-statistics, and one-dimensional support vector machine). Five-fold cross-validation was used to evaluate classification performance, using two publicly available gene expression data sets of glioma. Different experiments were run to compare the performance of the classification methods with and without feature selection. The results revealed the important role of feature selection in classifying gene expression data: by performing feature selection, the classification accuracy can be significantly boosted using a small number of genes. The relationship of features selected in
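As a minimal illustration of why feature selection helps in the "high dimensional, low sample size" regime, the sketch below ranks features by a Welch t-statistic and classifies with a nearest-centroid rule on synthetic data (not the glioma data sets; note that selecting features on the full data before cross-validation, done here for brevity, is optimistically biased):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "high dimensional, low sample size" data: 40 samples x 2000 genes,
# where only the first 20 genes differ between the two tumour classes.
n, p, informative = 40, 2000, 20
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(0, 1, (n, p))
X[y == 1, :informative] += 1.5

def t_scores(X, y):
    """Absolute Welch t-statistic per feature."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / se

def nearest_centroid_cv(X, y, folds=5):
    """5-fold cross-validated accuracy of a nearest-centroid classifier."""
    idx = np.arange(len(y)) % folds
    correct = 0
    for k in range(folds):
        tr, te = idx != k, idx == k
        c0 = X[tr & (y == 0)].mean(axis=0)
        c1 = X[tr & (y == 1)].mean(axis=0)
        d0 = ((X[te] - c0) ** 2).sum(axis=1)
        d1 = ((X[te] - c1) ** 2).sum(axis=1)
        correct += ((d1 < d0).astype(int) == y[te]).sum()
    return correct / len(y)

acc_all = nearest_centroid_cv(X, y)
top = np.argsort(t_scores(X, y))[::-1][:20]      # keep the 20 top-ranked genes
acc_sel = nearest_centroid_cv(X[:, top], y)
print(acc_all, acc_sel)
```

With 1980 noise genes drowning out the 20 informative ones, accuracy on all features suffers, while the selected subset classifies nearly perfectly, echoing the thesis's finding that a small number of selected genes can boost accuracy.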
Rohov Heorhii K.
The article describes the results of the author's study of the factors behind strategic decisions on the selection of methods of stock allocation by public joint stock companies in Ukraine. The author used the Random Forest mathematical apparatus for building classification trees, as well as informal methods. The article analyses the reasons that restrain public allocation of stock. It shows the significant influence on the selection of a stock allocation method of factors such as capital concentration, the balance rate of corporate rights, the sector of the economy, and significant participation of institutes of common investment or the state in the authorised capital. The hierarchical model built for classifying the factors of the issuing policy of joint stock companies finds logical justification in specific features of the institutional environment; however, it does not fit into the framework of the classical concept of the market economy. The model could be used both for forming the goals of corporate financial strategies and in improving state regulation of the activity of securities issuers. A prospect for further studies in this direction is identifying how the factors of stock allocation method selection are transformed under conditions of a revival of the stock market.
Brain decoding with functional magnetic resonance imaging (fMRI) requires analysis of complex, multivariate data. Multivoxel pattern analysis (MVPA) has been widely used in recent years. MVPA treats the activation of multiple voxels from fMRI data as a pattern and decodes brain states using pattern classification methods. Feature selection is a critical procedure in MVPA because it decides which features will be included in the classification analysis of fMRI data, thereby improving the performance of the classifier. Features can be selected by limiting the analysis to specific anatomical regions or by computing univariate (voxel-wise) or multivariate statistics. However, these methods either discard some informative features or select features with redundant information. This paper introduces principal feature analysis as a novel multivariate feature selection method for fMRI data processing. This multivariate approach aims to remove features with redundant information, thereby selecting fewer features while retaining the most information.
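A simplified sketch of the principal-feature-analysis idea on synthetic data: represent each feature by its loadings on the leading principal components, then keep one representative feature per group of redundant features. The grouping step below uses a farthest-first heuristic rather than the clustering of the original method, and the data are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 3 latent sources, each measured by 10 redundant features
# (e.g. voxels responding to the same underlying activation).
n, per_group = 200, 10
sources = rng.normal(0, 1, (n, 3))
X = np.repeat(sources, per_group, axis=1) + 0.3 * rng.normal(0, 1, (n, 3 * per_group))

# Represent each feature by its loadings on the 3 leading principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
A = Vt[:3].T                       # 30 features x 3 leading loadings

def farthest_first(A, k):
    """Pick k mutually distant loading vectors as group representatives."""
    centers = [0]
    for _ in range(k - 1):
        d = ((A[:, None] - A[centers][None]) ** 2).sum(-1).min(axis=1)
        centers.append(int(np.argmax(d)))
    return centers

selected = farthest_first(A, 3)    # one representative feature per group
print(sorted(i // per_group for i in selected))
```

Redundant features share nearly identical loading vectors, so the selected representatives come from three distinct redundancy groups: fewer features, little information lost.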
Murtha, D. Michael
A selection of technical reports, journal articles and books on various aspects of systematic methods for school planning and design is presented in this bibliography. The subject areas include the design process in terms of (1) practice, (2) theory, (3) methods, (4) decision systems, and (5) computer applications. Criteria for design with…
Good daylight conditions in office buildings have become an important issue due to new European regulatory demands which include energy consumption for electrical lighting in the building energy frame. Good daylight conditions in offices are thus in increased focus as an energy conserving measure. In order to evaluate whether a certain design is a good daylight design or not, building designers must perform a detailed evaluation of daylight levels, including the daylight performance of dynamic solar shadings, and include these in the energy performance evaluation. However, the mandatory national calculation tool in Denmark (Be06) for evaluating the energy performance of buildings currently uses a simple representation of available daylight in a room and simple assumptions regarding the control of shading devices. In a case example, this leads to an overestimation of the energy consumption...
Wooten, Rachel; Quinn, John; Macek, Joseph
Landau level mixing should influence the quantum Hall effect for all but the strongest applied magnetic fields. We propose a simple method for examining the effects of Landau level mixing by incorporating multiple Landau levels into the Haldane pseudopotentials through exact numerical diagonalization. Some of the resulting pseudopotentials for the lowest and first excited Landau levels are presented.
Yanagihara, Kota; Kubo, Shin; Dodin, Ilya; Nakamura, Hiroaki; Tsujimura, Toru
Geometrical-optics ray tracing is a reasonable numerical approach for describing electron cyclotron resonance waves (ECW) in slowly varying, spatially inhomogeneous plasma, and the results of this conventional method are adequate in most cases. However, in helical fusion plasmas, which have a complicated magnetic structure, strong magnetic shear combined with a large density scale length can cause mode coupling of waves outside the last closed flux surface, and the complicated absorption structure requires a strongly focused wave for ECH. Since the conventional ray equations for ECW contain no terms describing diffraction, polarization, or wave decay, they cannot accurately describe mode coupling, strongly focused waves, the behavior of waves in an inhomogeneous absorption region, and so on. As a fundamental solution to these problems, we consider an extension of the ray-tracing method. The specific procedure is planned as follows: first, calculate the reference ray by the conventional method and define a local ray-based coordinate system along the reference ray; then, calculate the evolution of the distributions of amplitude and phase in the ray-based coordinates step by step. The progress of our extended method will be presented.
Trögl, J.; Pavlorková, Jana; Packová, P.; Seják, J.; Kuráň, P.; Kuráň, J.; Popelka, J.; Pacina, J.
Vol. 8, No. 3 (2016), Article No. 253. ISSN 2071-1050 Institutional support: RVO:67985858 Keywords: biotope assessment * biotope valuation method * soil microbial communities Subject RIV: DJ - Water Pollution; Quality Impact factor: 1.789, year: 2016
Rasim M. Alguliyev
Personnel evaluation is an important process in human resource management. Its multicriteria nature and the presence of both qualitative and quantitative factors make it considerably complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights may be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and to provide approximate reasoning over linguistic values. For evaluation, we selected five information culture criteria, and the weights of the criteria were calculated using the worst-case method. A modified fuzzy VIKOR is then proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques, and a comparative analysis of the results of the two methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method: it is effective from a computational-complexity point of view, and it yields a more acceptable ranking.
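The crisp VIKOR ranking underlying the fuzzy variants can be sketched as follows; the candidate scores and weights are hypothetical, and the fuzzy and modified steps of the paper are omitted:

```python
import numpy as np

# Hypothetical personnel scores on five criteria (rows = candidates),
# all benefit-type, with criterion weights w.
X = np.array([
    [7.0, 8.0, 6.5, 9.0, 7.5],
    [8.5, 6.0, 7.0, 7.5, 8.0],
    [6.0, 7.5, 8.0, 6.5, 6.0],
])
w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

f_best, f_worst = X.max(axis=0), X.min(axis=0)
regret = w * (f_best - X) / (f_best - f_worst)   # weighted regret per criterion
S, R = regret.sum(axis=1), regret.max(axis=1)    # group utility / individual regret

v = 0.5                                          # weight of the "majority" strategy
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
ranking = np.argsort(Q)                          # best (lowest Q) first
print(np.round(Q, 3), ranking)
```

The candidate with the lowest compromise index Q is ranked best; the fuzzy versions replace the crisp scores with triangular fuzzy numbers before this aggregation.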
Alptekin, Orkun; Alptekin, Nesrin
Selection of the most suitable contractor is an important process in public construction projects. This major decision may influence the progress and success of a construction project, and improper selection of contractors may lead to problems such as poor quality of work and delays in project duration. Especially in construction projects for public buildings, the proper choice of contractor benefits the public institution. Public procurement processes have different characteristics reflecting dissimilarities in the political, social and economic features of every country. In Turkey, Turkish Public Procurement Law PPL 4734 is the main regulatory law for the procurement of public buildings. According to PPL 4734, public construction administrators have to contract with the lowest bidder who meets the minimum requirements according to the criteria in the prequalification process. Because of the restrictive provisions of PPL 4734, public administrators cannot adequately select the proper contractor: the lowest-bid method does not enable them to select the most qualified contractor, and they have realised that selection of a contractor based on the lowest bid alone is inadequate and may lead to failure of the project in terms of time delays and poor quality standards. In order to evaluate the overall efficiency of a project, it is necessary to identify selection criteria. This study focuses on identifying the importance of criteria besides the lowest-bid criterion in the contractor selection process of PPL 4734. A survey was conducted among the staff of the Department of Construction Works of Eskisehir Osmangazi University. According to the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) analysis results, termination of construction work in previous tenders is the most important of the 12 determined criteria; the lowest-bid criterion ranks fifth.
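The TOPSIS procedure used in the study can be sketched as follows; the contractor scores, weights, and criteria below are hypothetical, not the survey data:

```python
import numpy as np

# Hypothetical contractor scores (rows) on four criteria (columns):
# the first three are benefit criteria, the last (bid price) is a cost.
X = np.array([
    [80.0, 70.0, 9.0, 4.2],
    [65.0, 85.0, 7.0, 3.8],
    [75.0, 60.0, 8.0, 4.9],
])
w = np.array([0.35, 0.30, 0.20, 0.15])
benefit = np.array([True, True, True, False])

# 1. vector-normalize, 2. weight, 3. ideal / anti-ideal, 4. relative closeness.
V = w * X / np.sqrt((X ** 2).sum(axis=0))
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)
print(np.argsort(closeness)[::-1])   # best contractor first
```

Contractors are ranked by closeness to the ideal solution, which is how the study orders its 12 criteria-weighted alternatives rather than relying on the lowest bid alone.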
Tomczuk, Zygmunt; Olszanski, Theodore W.; Battles, James E.
A negative electrode that includes a lithium alloy as active material is prepared by briefly submerging a porous, electrically conductive substrate in a melt of the alloy. Prior to solidification, excess melt can be removed by vibrating or otherwise manipulating the filled substrate to expose interstitial surfaces. Electrodes such as solid lithium-aluminum filled within a substrate of metal foam are provided.
Xiao, Jie; Lu, Dongping; Liu, Jun; Zhang, Jiguang; Graff, Gordon L.
Electrodes having nanostructure and/or utilizing nanoparticles of active materials and having high mass loadings of the active materials can be made to be physically robust and free of cracks and pinholes. The electrodes include nanoparticles having electroactive material, which nanoparticles are aggregated with carbon into larger secondary particles. The secondary particles can be bound with a binder to form the electrode.
Ikeda, Hozumi; Kishimoto, Kenji; Shimonishi, Yoshihiro; Ohmura, Masahiro; Kosakai, Kazuhisa; Ochi, Hironobu
The selection of imaging conditions in scintigraphy was evaluated using the analytic hierarchy process. First, a selection method was derived in terms of image quality and imaging time. The influence on image quality was considered to depend on changes in system resolution, count density, image size, and image density; the influence on imaging time was considered to depend on changes in system sensitivity and data acquisition time. A phantom study was performed for paired comparison of these selection factors and of the relations between sample data and the factors: Rollo phantom images were taken while varying count density, image size, and image density. Image quality was expressed by visual evaluation scores obtained by comparing pairs of images for the clearer cold lesion on the scintigrams. Imaging time was expressed by values relative to the changes in count density. System resolution and system sensitivity were held constant in this study. Next, the analytic hierarchy process was applied to this selection of imaging conditions using these values. We conclude that the selection of imaging conditions can be analyzed quantitatively using the analytic hierarchy process, and that this analysis develops theoretical consideration of the imaging technique. (author)
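The AHP weight calculation behind such paired comparisons can be sketched as follows; the pairwise comparison values below are hypothetical, not those elicited in the phantom study:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four imaging factors
# (system resolution, count density, image size, image density) on
# Saaty's 1-9 scale; A[i, j] = importance of factor i over factor j.
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

# Priority weights via the geometric-mean (logarithmic least squares) method.
g = A.prod(axis=1) ** (1.0 / len(A))
w = g / g.sum()

# Consistency ratio: lambda_max from A w ~ lambda w; RI = 0.90 for n = 4.
lam = (A @ w / w).mean()
CI = (lam - len(A)) / (len(A) - 1)
CR = CI / 0.90
print(np.round(w, 3), round(CR, 3))
```

A consistency ratio below 0.1 indicates the paired comparisons are acceptably consistent, after which the weights can be combined with the measured image-quality and imaging-time scores.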
Mohammad Panahi Borujeni
The increasing complexity surrounding decision-making situations has made it inevitable for practitioners to draw on the ideas of a group of experts or decision makers (DMs) instead of individuals. In a large proportion of recent studies, not enough attention has been paid to handling uncertainty in practical ways. In this paper, a hesitant fuzzy preference selection index (HFPSI) method is proposed, based on a new soft computing approach with the risk preferences of DMs, to deal with imprecise multi-criteria decision-making problems. Qualitative assessment criteria are considered in the process of the proposed method to help the DMs by providing suitable expressions of membership degrees for an element under a set. Moreover, the best alternative is selected by considering the concepts of preference relations and hesitant fuzzy sets simultaneously. The DMs' weights are determined according to the proposed hesitant fuzzy compromise solution technique to prevent judgment errors. The proposed method has also been extended to aggregate the DMs' opinions during the last stage (last aggregation) to avoid data loss. A real case study of a mining contractor selection problem is provided to demonstrate the effectiveness and efficiency of the proposed HFPSI method in practice; a comparative analysis is then performed to show the feasibility of the presented approach, and a sensitivity analysis shows the effect of the DMs' weights and the last aggregation approach on the dispersion of the alternatives' ranking values.
Futures studies is often conflated with science fiction or pop-futurism; consequently, there is a need to demarcate what is futures studies and what is not. For the same reason, the essay stresses the need for quality control focused on futures research and its methods. Publications in futu... programme are (only) partly reduced by applying Causal Layered Analysis as an internal quality control. The following selective bibliography is focussed on these methodological issues...
Dane, C. Brent; Hackel, Lloyd; Harris, Fritz B.
A laser system, such as a master oscillator/power amplifier system, comprises a gain medium and a stimulated Brillouin scattering (SBS) mirror system. The SBS mirror system includes an in situ filtered SBS medium comprising a compound having a small negative non-linear index of refraction, such as a perfluoro compound. An SBS relay telescope having a telescope focal point includes a baffle at the focal point which blocks off-angle beams. A beam splitter placed between the SBS mirror system and the SBS relay telescope directs a fraction of the beam to an alternate beam path for an alignment fiducial. The SBS mirror system has a collimated SBS cell and a focused SBS cell, with an adjustable attenuator between them by which the pulse width of the reflected beam can be adjusted.
In this paper, the magnetic flux leakage and eddy current methods were used to evaluate changes in material properties caused by stress. Seven samples made of a ferromagnetic material with different levels of applied stress were prepared. First, the leakage magnetic fields were measured by scanning the surface of the specimens with a GMR gradiometer. Next, the same samples were evaluated using an eddy current sensor, and a comparison between the results obtained from both methods was carried out. Finally, selected parameters of the measured signal were calculated and used to evaluate the level of applied stress. A strong correlation between the amount of applied stress and the maximum amplitude of the derivative was confirmed.
Slater, T. F.; Elfring, L.; Novodvorsky, I.; Talanquer, V.; Quintenz, J.
Science education reform documents universally call for students to have authentic and meaningful experiences using real data in the context of their science education. The underlying philosophical position is that students analyzing data can have experiences that mimic actual research. In short, research experiences that reflect the scientific spirit of inquiry can potentially: prepare students to address real-world complex problems; develop students' ability to use scientific methods; prepare students to critically evaluate the validity of data or evidence and of the consequent interpretations or conclusions; teach quantitative skills, technical methods, and scientific concepts; increase verbal, written, and graphical communication skills; and train students in the values and ethics of working with scientific data. However, it is unclear what the broader pre-service teacher preparation community is doing to prepare future teachers to promote, manage, and successfully facilitate their own students in conducting authentic scientific inquiry. Surveys of undergraduates in secondary science education programs suggest that students have had almost no experience themselves in conducting open scientific inquiry in which they develop researchable questions, design strategies to pursue evidence, and communicate data-based conclusions. In response, the College of Science Teacher Preparation Program at the University of Arizona requires all students enrolled in its various science teaching methods courses to complete an open inquiry research project and defend their findings at a specially designed inquiry science mini-conference at the end of the term. End-of-term surveys show that students enjoy their research experience and believe that it enhances their ability to facilitate their own future students in conducting open inquiry.
Sidik, S. M.
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
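The instability of ordinary least squares as the normal equations approach singularity, and the stabilizing constraint that a biased estimator such as ridge introduces, can be seen in a small sketch on synthetic, nearly collinear data (illustrative numbers, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Nearly collinear design: x2 is x1 plus tiny noise, so X'X is near-singular.
n = 50
x1 = rng.normal(0, 1, n)
x2 = x1 + 1e-4 * rng.normal(0, 1, n)
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.1 * rng.normal(0, 1, n)

# Ordinary least squares vs ridge: (X'X + k I)^{-1} X'y with k > 0.
ols = np.linalg.solve(X.T @ X, X.T @ y)
k = 0.1
ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)

# Individual OLS coefficients can be wildly unstable under collinearity,
# while their sum (an estimable function) is well determined; ridge shrinks
# toward a stable, biased estimate that splits the sum between the two.
print(np.round(ols, 2), np.round(ridge, 2))
```

Only the sum beta1 + beta2 is well identified here, which illustrates the abstract's point that near singularity one should focus on estimable functions of the parameters rather than the full parameter vector.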
Day-Lewis, F. D.; Johnson, C.D.; Slater, L.D.; Robinson, J.L.; Williams, J.H.; Boyden, C.L.; Werkema, D.D.; Lane, J.W.
Geophysical technologies have the potential to improve site characterization and monitoring in fractured rock, but the appropriate and effective application of geophysics at a particular site strongly depends on project goals (e.g., identifying discrete fractures) and site characteristics (e.g., lithology). No method works at every site or for every goal. New approaches are needed to identify a set of geophysical methods appropriate to specific project goals and site conditions while considering budget constraints. To this end, we present the Excel-based Fractured-Rock Geophysical Toolbox Method Selection Tool (FRGT-MST). We envision the FRGT-MST (1) equipping remediation professionals with a tool to understand what is likely to be realistic and cost-effective when contracting geophysical services, and (2) reducing applications of geophysics with unrealistic objectives or where methods are likely to fail.
Larsén, Xiaoli Guo; Badger, Jake; Hahmann, Andrea N.
A selective dynamical downscaling method is developed to obtain extreme-wind atlases for large areas. The method is general, efficient and flexible, and consists of three steps: (i) identifying storm episodes for a particular area, (ii) downscaling the storms using mesoscale modelling, and (iii) post-processing. The post-processing generalizes the winds from the mesoscale modelling to standard conditions, i.e. 10-m height over a homogeneous surface with a roughness length of 5 cm. The generalized winds are then used to calculate the 50-year wind using the annual maximum method for each mesoscale grid point. The generalization of the mesoscale winds through the post-processing provides a framework for data validation and for further applying the mesoscale extreme winds at specific places using microscale modelling. The results are compared with measurements from two areas with different...
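The annual-maximum step can be sketched with a Gumbel fit; a method-of-moments fit is used here for brevity, and the wind values are synthetic rather than the study's mesoscale output:

```python
import numpy as np

rng = np.random.default_rng(4)

# Annual-maximum generalized winds (m/s) for one mesoscale grid point;
# here synthetic draws from a Gumbel(mu=22, beta=2.5) distribution.
mu_true, beta_true = 22.0, 2.5
annual_max = mu_true - beta_true * np.log(-np.log(rng.uniform(size=30)))

# Method-of-moments Gumbel fit: beta = s*sqrt(6)/pi, mu = mean - gamma*beta.
gamma = 0.5772156649
beta = annual_max.std(ddof=1) * np.sqrt(6) / np.pi
mu = annual_max.mean() - gamma * beta

# T-year return wind: the level exceeded on average once in T years.
T = 50.0
u50 = mu - beta * np.log(-np.log(1.0 - 1.0 / T))
print(round(u50, 1))
```

Repeating this fit at every mesoscale grid point yields the 50-year wind map; the study additionally generalizes the winds to standard conditions before fitting.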
Tereshin, G.S.; Kharitonova, L.K.; Kuznetsova, O.B.
Heterogeneous systems Y(NO3)3 (YCl3)-HnL-KNO3 (KCl)-H2O are investigated by potentiometric titration (with coulometric generation of OH- ions), where HnL is one of the following acids: oxyethylidenediphosphonic, aminobenzylidenediphosphonic, glycine-bis-methylphosphonic, nitrilotrimethylphosphonic (H6L), or ethylenediaminetetramethylphosphonic. The range of existence of YH(n-3)L·yH2O has been determined. The possibility of using potentiometric titration for investigating heterogeneous systems is demonstrated by the study of the system Y(NO3)3-H6L-KOH-H2O by the method of residual concentration. The two methods have shown that at pH ..., YH3L·yH2O is formed; at pH=6, KYH2L·y'H2O; and at pH=7, K2YHL·y''H2O. The complete solubility products of the nitrilotrimethylphosphonates are evaluated
Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA
Random treatment delivery errors (organ motion and set-up error) can be incorporated into treatment planning software using a convolution method: the mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian, with variance equal to the sum of the organ-motion and set-up-error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population, so the variance of the variation kernel equals the sum of the treatment delivery variance and the variance of organ motion at planning over the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique were compared with benchmark Monte Carlo (MC) computations and agreed with the MC results in all cases to better than ±1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
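The basic convolution step can be sketched in one dimension; the geometry and error magnitudes below are illustrative, not the paper's clinical values:

```python
import numpy as np

# 1-D static dose profile: 60 Gy flat across a 40 mm target, on a 1 mm grid.
x = np.arange(-60.0, 61.0, 1.0)
dose = np.where(np.abs(x) <= 20.0, 60.0, 0.0)

# Variation kernel: Gaussian with variance equal to the sum of the
# organ-motion and set-up-error variances (values are illustrative).
sigma = np.sqrt(3.0 ** 2 + 2.0 ** 2)        # mm
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()

# Mean treatment dose = convolution of the static dose with the kernel:
# the plateau is preserved while the field edges are blurred.
mean_dose = np.convolve(dose, kernel, mode="same")
print(round(mean_dose[60], 2))              # dose at the field centre (x = 0)
```

The blurring is confined to a few sigma around the field edges, which is why the convolution estimate can track Monte Carlo results closely inside the planning treatment volume.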
Koh, Chung-Yan; Piccini, Matthew E.; Singh, Anup K.
Examples are described including measurement systems for conducting competition assays. A first chamber of an assay device may be loaded with a sample containing a target antigen. The target antigen in the sample may be allowed to bind to antibody-coated beads in the first chamber. A control layer separating the first chamber from a second chamber may then be opened to allow a labeling agent loaded in a first portion of the second chamber to bind to any unoccupied sites on the antibodies. A centrifugal force may then be applied to transport the beads through a density media to a detection region for measurement by a detection unit.
Milton, Kimball A
This is a graduate level textbook on the theory of electromagnetic radiation and its application to waveguides, transmission lines, accelerator physics and synchrotron radiation. It has grown out of lectures and manuscripts by Julian Schwinger prepared during the war at MIT's Radiation Laboratory, updated with material developed by Schwinger at UCLA in the 1970s and 1980s, and by Milton at the University of Oklahoma since 1994. The book includes a great number of straightforward and challenging exercises and problems. It is addressed to students in physics, electrical engineering, and applied mathematics seeking a thorough introduction to electromagnetism with emphasis on radiation theory and its applications.
Mimori, T.; Miyajima, K.; Kozeki, M.; Kubota, T.; Tusa, E.; Keskinen, A.
A nuclide removal system was designed for treatment of liquid radioactive waste at the Japan Atomic Energy Research Institute (JAERI) Tokai site. The total system will include removal of plutonium, cesium and strontium. Removal of plutonium will be carried out by a method developed by JAERI. Removal of cesium and strontium will be carried out by methods developed in Finland. The whole project will be implemented for JAERI in cooperation between Mitsui Engineering and Shipbuilding and IVO International, and has been carried out under the Science and Technology Agency (STA) of Japan. The liquid to be treated includes 7.4x10^9 Bq/L of cesium and 7.4x10^9 Bq/L of strontium. The amount of alpha nuclides is 3.7x10^6 Bq/L. The nitric acid concentration is 1.74 mol/L. A volume of 11,000 liters had to be treated in 200 batches of operation. Removal of cesium and strontium is based on the use of new ion exchange materials developed in Finland. These inorganic ion exchange materials have extremely good properties for separating cesium and strontium from even very difficult liquids. The ion exchange materials will be used in columns containing materials for both cesium and strontium. According to column tests with simulated waste, one 2-liter column will effectively reach the required DF during 10 batches of operation. Purified liquid can be led to further liquid treatment at the site. After treatment of the liquids, both used particle filters and used ion exchange columns will be drained and stored to await final treatment and disposal. The designed treatment system has a special beneficial feature in that it does not produce secondary waste: the final waste is in the form of particle filters or ion exchange columns with material. Used ion exchange columns and filters will be replaced with new ones by means of remote handling. Construction of the treatment system is scheduled to commence in FY1995 and assembly at the site in FY1996. (J.P.N.)
Reyes Santos, Joost; Haimes, Yacov Y
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1990 Nobel Prize in Economics. A typical approach to measuring a portfolio's expected return is based on the historical returns of the assets included in the portfolio. Portfolio risk, on the other hand, is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model.
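The extreme-risk measure described above can be sketched numerically. The following is a minimal illustration of a lower-tail conditional expectation of the kind f(4) denotes; the partition point (here the 5th percentile) and the toy return series are illustrative assumptions, not values from the article.

```python
# Sketch of an extreme-risk measure as a lower-tail conditional expectation,
# in the spirit of the PMRM f(4) described above.  The 5% partition and the
# toy return series are illustrative assumptions.

def lower_tail_expectation(returns, alpha=0.05):
    """Conditional expectation of returns in the lower alpha-tail."""
    ordered = sorted(returns)
    k = max(1, int(alpha * len(ordered)))   # size of the tail region
    tail = ordered[:k]
    return sum(tail) / len(tail)

returns = [0.01, 0.02, -0.03, 0.015, -0.25, 0.005, 0.012, -0.01,
           0.02, 0.03, -0.04, 0.01, 0.018, -0.02, 0.025, 0.008,
           -0.005, 0.011, 0.022, -0.015]

expected_return = sum(returns) / len(returns)
f4 = lower_tail_expectation(returns, alpha=0.05)   # conditional tail mean
```

A multiobjective formulation of the kind the article describes would then trade off `expected_return` (to be maximized) against `f4` (whose magnitude is to be controlled).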
Garg, Anil K; Garg, Seema
The evidence suggests that our perception of physical beauty is based on how closely the features of one's face reflect phi (the golden ratio) in their proportions. By extension, it should be possible to use a mathematical parameter to design an anterior hairline for any face. To establish a user-friendly method to design an anterior hairline in cases of male pattern alopecia. We need a flexible measuring tape and a skin marker. A reference point A (glabella) is taken between the eyebrows. Mark point E, near the lateral canthus, 8 cm horizontally on either side of the central point A. A mid-frontal point (point B) is marked 8 cm from point A on the forehead in the mid-vertical plane. The frontotemporal points (C and C') are marked on the frontotemporal area, 8 cm in a horizontal plane from point B and 8 cm in a vertical plane from point E. The temporal peak points (D and D') are marked on the line joining the frontotemporal point C to the lateral canthus point E, slightly more than halfway toward the lateral canthus, usually 5 cm from the frontotemporal point C. This line makes the anterior border of the temporal triangle. We conducted a study of 431 cases of male pattern alopecia. The average distance of the mid-frontal point from the glabella was 7.9 cm. The reported patient satisfaction was 94.7%. Our method gives a skeletal frame of the anterior hairline with minimal criteria, with no need for the visual imagination and experience of the surgeon. It automatically takes care of the curvature of the forehead and is easy to use for a novice surgeon.
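The landmark construction above reduces to simple plane geometry. The following sketch places the points in a flat 2-D frame with the glabella at the origin; real foreheads are curved, so this only illustrates the distances involved, and the coordinate frame is an assumption of the sketch.

```python
# Geometric sketch of the hairline landmarks described above, assuming a
# flat 2-D frame: glabella (A) at the origin, +y up the forehead, +x lateral.
import math

A = (0.0, 0.0)   # glabella, between the eyebrows
E = (8.0, 0.0)   # near the lateral canthus, 8 cm lateral to A
B = (0.0, 8.0)   # mid-frontal point, 8 cm above A
C = (8.0, 8.0)   # frontotemporal point: 8 cm lateral to B, 8 cm above E

# Temporal peak D lies on segment C -> E, usually 5 cm from C.
ce = (E[0] - C[0], E[1] - C[1])
ce_len = math.hypot(*ce)
D = (C[0] + 5.0 * ce[0] / ce_len, C[1] + 5.0 * ce[1] / ce_len)
```

In this idealized frame the C-to-E segment is 8 cm long, so marking D at 5 cm from C is indeed "slightly more than halfway toward the lateral canthus", as the method states.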
Crandall, David Lynn
Sighting optics include a front sight and a rear sight positioned in a spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus for a user images of the front sight and the target.
Yuasa, Y.; Ohta, M.; Watanabe, A.; Tani, A.; Takashima, N.
A selective adsorption-desorption method has been developed as an effective means of enriching krypton and xenon gases. A series of laboratory-scale tests was performed to provide basic data on the method when applied to off-gas streams of nuclear power plants. In the first step of the enrichment process, krypton was adsorbed on solid adsorbents from dilute mixtures with air at temperatures ranging from -50 °C to -170 °C. After complete breakthrough was obtained, the adsorption bed was evacuated at low temperature by a vacuum pump. By combining these two steps krypton was highly enriched on the adsorbents, and the enrichment factor for krypton was calculated as the product of the individual enrichment factors of each step. Two types of adsorbents, coconut charcoal and molecular sieve 5A, were used. Experimental results showed that the present method gave a greater enrichment factor than the conventional method, which uses the selective adsorption step only. (U.S.)
Kauvar, Arielle N B; Cronin, Terrence; Roenigk, Randall; Hruza, George; Bennett, Richard
Basal cell carcinoma (BCC) is the most common cancer in the US population, affecting approximately 2.8 million people per year. Basal cell carcinomas are usually slow-growing and rarely metastasize, but they do cause localized tissue destruction, compromised function, and cosmetic disfigurement. To provide clinicians with guidelines for the management of BCC based on evidence from a comprehensive literature review and consensus among the authors. An extensive review of the medical literature was conducted to evaluate the optimal treatment methods for cutaneous BCC, taking into consideration cure rates, recurrence rates, aesthetic and functional outcomes, and cost-effectiveness of the procedures. Surgical approaches provide the best outcomes for BCCs. Mohs micrographic surgery provides the highest cure rates while maximizing tissue preservation, maintenance of function, and cosmesis. Mohs micrographic surgery is an efficient and cost-effective procedure and remains the treatment of choice for high-risk BCCs and for those in cosmetically sensitive locations. Nonsurgical modalities may be used for low-risk BCCs when surgery is contraindicated or impractical, but the cure rates are lower.
The following paper describes selected tools and methods from the field of quality management and their practical application to defined examples. The solved examples were elaborated in the form of electronic support material. This support material, elaborated in detail, gives students the opportunity to thoroughly practice specific issues and helps them prepare for exams, which should consequently lead to improved education. Students of the combined study form in particular will appreciate this support. The paper specifies the project objectives, the subjects to be covered by the support, the target groups, and the structure and manner of elaboration of the planned electronic exercise book. The emphasis is not only on the manual solution of selected examples, which may help students understand the principles and relationships, but also on solving selected examples and interpreting the results using software support. The statistical software Statgraphics Plus v 5.0 is used in the support material because it is free to use for all students of the faculty. An illustrative example from the subject Basic Statistical Methods of Quality Management is also part of this paper.
Sasaki, Ryoichi; Komoda, Norihisa.
Object: To extend the replacement period of a filter for adsorbing radioactive material by discharging waste gas containing radioactive material produced from atomic power equipment after treating it by a method selected on the basis of wind direction measurements. Structure: Exhaust gas containing radioactive material produced from atomic power equipment is discharged after it is treated by a method selected on the basis of the results of wind direction measurement. For instance, in the case of a sea wind, the waste gas passes through a route selected for this case and is discharged through the waste gas outlet. When the sea wind disappears (that is, when a land wind or calm sets in), the exhaust gas is switched to the route for cases other than sea wind, so that it passes through a filter of activated carbon where the radioactive material is removed by adsorption. The waste gas, now free from radioactive material, is discharged through the waste gas outlet. (Moriyama, K.)
Ustinov, A; Khayrullina, A; Khmelik, M; Sveshnikova, A; Borzenko, V
Development of fuel cell (FC) and metal-hydride hydrogen storage (MH) technologies continues to demonstrate higher efficiency and safety, as hydrogen is stored in a bound state at low pressures of about 2 bar. Combining a FC/MH system with an electrolyser powered by a renewable source allows creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electrochemical battery to start up the FC and to compensate the electric load when the FC fails to deliver the necessary power. The present paper delivers the results of experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH accumulator and an electrochemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of the power supply of the real town of Batamai in Russia. (paper)
Microarray gene expression data have gained great importance in recent years due to their role in disease diagnosis and prognosis, which helps in choosing the appropriate treatment plan for patients. This technology has ushered in a new era of molecular classification. Interpreting gene expression data remains a difficult problem and an active research area due to its inherent "high dimensional, low sample size" nature. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed to aid in correctly classifying different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This paper presents a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three classification methods: support vector machines, k-nearest neighbor and random forest; and of eight feature selection methods: information gain, twoing rule, sum minority, max minority, Gini index, sum of variances, t-statistics, and one-dimensional support vector machine. Five-fold cross-validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used in the experiments. The results revealed the important role of feature selection in classifying gene expression data: by performing feature selection, classification accuracy can be significantly boosted using a small number of genes. The relationship of the features selected by the different feature selection methods is investigated, and the features most frequently selected in each fold among all methods for both datasets are evaluated.
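One pipeline from the comparison above can be sketched in a few lines: rank features by an absolute t-statistic, keep the top k, then classify with a nearest-neighbour rule. The tiny toy dataset below is made up for illustration; real microarray data would have thousands of features and few samples.

```python
# Minimal sketch of t-statistic feature selection followed by 1-NN
# classification, one combination from the comparative study above.
# The toy data are invented: feature 0 separates the classes, 1-2 are noise.
import math

def t_statistic(values_a, values_b):
    """Two-sample t-statistic for one feature."""
    na, nb = len(values_a), len(values_b)
    ma, mb = sum(values_a) / na, sum(values_b) / nb
    va = sum((v - ma) ** 2 for v in values_a) / (na - 1)
    vb = sum((v - mb) ** 2 for v in values_b) / (nb - 1)
    pooled = math.sqrt(va / na + vb / nb) or 1e-12   # guard zero variance
    return (ma - mb) / pooled

def select_features(X, y, k):
    """Indices of the k features with the largest |t|."""
    scores = []
    for j in range(len(X[0])):
        a = [x[j] for x, label in zip(X, y) if label == 0]
        b = [x[j] for x, label in zip(X, y) if label == 1]
        scores.append((abs(t_statistic(a, b)), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def predict_1nn(X_train, y_train, x, features):
    """1-NN prediction using only the selected feature indices."""
    def dist(u):
        return sum((u[j] - x[j]) ** 2 for j in features)
    return min(zip(X_train, y_train), key=lambda p: dist(p[0]))[1]

X = [[0.1, 5.0, 2.0], [0.2, 4.0, 9.0], [0.15, 7.0, 3.0],
     [0.9, 5.5, 2.5], [1.0, 4.5, 8.0], [0.95, 6.5, 3.5]]
y = [0, 0, 0, 1, 1, 1]

top = select_features(X, y, k=1)   # picks the discriminative feature
```

In a faithful replication of the study, this selection-plus-classification step would sit inside the five-fold cross-validation loop, so that features are re-selected on each training fold.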
Anikin, I. V.; Alnajjar, Kh
In this paper we suggest a method for selecting primitive polynomials of a special type. Such polynomials can be efficiently used as characteristic polynomials of linear feedback shift registers in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree, and applying primitivity tests to obtain the primitive ones. Finally, two primitive polynomials found by the proposed method were used in a pseudo-random number generator based on fuzzy logic (FRNG), which had been suggested previously by the authors. The sequences generated by the new version of the FRNG have low correlation magnitude, high linear complexity and lower power consumption, and are more balanced and have better statistical properties.
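The reason primitivity matters for the characteristic polynomial can be shown directly: a primitive polynomial of degree n gives an LFSR the maximal period 2^n - 1. The sketch below steps a small 4-bit Fibonacci LFSR with taps {4, 1}, i.e. the primitive polynomial x^4 + x + 1, chosen purely for illustration; the polynomials the paper selects are of much higher degree.

```python
# Sketch: a primitive characteristic polynomial yields a maximal-period LFSR.
# Here x^4 + x + 1 (taps {4, 1}) drives a 4-bit Fibonacci LFSR.

def lfsr_period(taps, n, seed=1):
    """State period of an n-bit Fibonacci LFSR (taps given as exponents)."""
    state = seed
    period = 0
    while True:
        # feedback bit = XOR of the tapped stages (tap t -> bit t-1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (n - 1))   # shift right, insert feedback
        period += 1
        if state == seed:
            return period

period = lfsr_period(taps=[4, 1], n=4)   # primitive -> full period 2**4 - 1
```

A non-primitive characteristic polynomial would split the nonzero states into shorter cycles, which is why the second step of the method above filters the irreducible candidates down to primitive ones.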
Beran, M.; Klosova, E.; Krtil, J.; Sus, F.; Kuvik, V.; Vrbova, L.; Hamplova, M.; Lengyel, J.; Kelnar, L.; Zakouril, K.
The Central Testing Laboratory of the Nuclear Research Institute at Rez has for a decade been participating in the development of analytical procedures and providing analyses of samples of different types and origins. The analytical procedures developed have been published in specialized journals, and a number of them in the three-part Manual of analytical methods. The 4th part of the Manual contains selected physico-chemical methods developed or modified by the Laboratory in the years 1986-1990 within the project ''Development of physico-chemical analytical methods''. In most cases, the techniques are for non-nuclear applications. Some can find wider application, especially in analyses of environmental samples. Others have been developed for specific cases of sample analysis or require special instrumentation (a mass spectrometer), which partly restricts their applicability in other institutions. (author)
Meneylyuk, Aleksandr Ivanovich
The article highlights the important task of project management in the reprofiling of buildings. In construction-industry project management it is expedient to pay attention to selecting effective engineering solutions that reduce duration and cost. This article presents a methodology for the selection of efficient organizational and technical solutions for the reconstruction of buildings being reprofiled. The method is based on compiling project variants in Microsoft Project and on experimental-statistical analysis using the program COMPEX. Introducing this technique in the reprofiling of buildings allows efficient project models to be chosen, depending on the given constraints. The technique can also be used for various other construction projects.
Munck, Allan; Madsen, Jan
The complexity of many types of systems has grown considerably over the last decades. Using appropriate systems engineering tools therefore becomes increasingly important. Starting the tool selection process can be intimidating because organizations often only have a vague idea about what they need. Our method is based on analyses of the actual needs and the available tools. Grouping needs into categories allows us to obtain a comprehensive set of requirements for the tools. The entire model-based systems engineering discipline was categorized for a modeling tool case to enable development of a tool specification that has been in successful operation since 2013 at GN Hearing. We further utilized the method to select a set of tools that we used on pilot cases at GN Hearing for modeling, simulating and formally verifying embedded systems.
The variations in ground beetle (Coleoptera: Carabidae) assemblages across three types of farmland habitats (arable land, meadows and woody vegetation) were studied in relation to vegetation cover structure, intensity of agrotechnical interventions and selected soil properties. Material was pitfall-trapped in 2010 and 2011 at twelve sites of the agricultural landscape in the Prešov town and its near vicinity, Eastern Slovakia. A total of 14,763 ground beetle individuals were trapped, representing 92 Carabidae species, with the following six species dominating: Poecilus cupreus, Pterostichus melanarius, Pseudoophonus rufipes, Brachinus crepitans, Anchomenus dorsalis and Poecilus versicolor. The studied habitats differed significantly in the number of trapped individuals and activity abundance, as well as in the representation of carabids according to their habitat preferences and ability to fly. However, no significant differences were observed in diversity, evenness or dominance. The environmental variables most significantly affecting the species variability of the Carabidae assemblages were soil moisture and the herb layer at 0-20 cm. Other important variables chosen by forward selection were the intensity of agrotechnical interventions, humus content and shrub vegetation. The remaining soil properties seem to be of only secondary importance for the adult carabids. The environmental variables have the strongest effect on habitat specialists, whereas ground beetles without special requirements for habitat quality seem to be only slightly affected by the studied environmental variables.
Fu, Feng; Wang, Long; Nowak, Martin A.; Hauert, Christoph
Investigating the evolutionary dynamics of game theoretical interactions in populations where individuals are arranged on a graph can be challenging in terms of computation time. Here, we propose an efficient method to study any type of game on arbitrary graph structures for weak selection. In this limit, evolutionary game dynamics represents a first-order correction to neutral evolution. Spatial correlations can be empirically determined under neutral evolution and provide the basis for formulating the game dynamics as a discrete Markov process by incorporating a detailed description of the microscopic dynamics based on the neutral correlations. This framework is then applied to one of the most intriguing questions in evolutionary biology: the evolution of cooperation. We demonstrate that the degree heterogeneity of a graph impedes cooperation and that the success of tit for tat depends not only on the number of rounds but also on the degree of the graph. Moreover, considering the mutation-selection equilibrium shows that the symmetry of the stationary distribution of states under weak selection is skewed in favor of defectors for larger selection strengths. In particular, degree heterogeneity—a prominent feature of scale-free networks—generally results in a more pronounced increase in the critical benefit-to-cost ratio required for evolution to favor cooperation as compared to regular graphs. This conclusion is corroborated by an analysis of the effects of population structures on the fixation probabilities of strategies in general 2×2 games for different types of graphs. Computer simulations confirm the predictive power of our method and illustrate the improved accuracy as compared to previous studies.
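The weak-selection framework above is built as a first-order correction to neutral evolution, so the neutral baseline is worth making concrete. The following sketch checks the textbook fact that a single strictly neutral mutant in a well-mixed Moran-type process fixes with probability 1/N; graph structure, games and payoffs are deliberately left out, and the simple birth-death update used here is an assumption of the sketch, not the authors' framework.

```python
# Sanity-check sketch of the neutral baseline underlying weak selection:
# a single neutral mutant in a population of size N fixes with probability 1/N.
import random

def neutral_fixation_probability(N, runs, seed=1):
    """Monte-Carlo estimate of the fixation probability of one neutral mutant."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(runs):
        mutants = 1
        while 0 < mutants < N:
            # neutral drift: one random birth and one random death per step
            mutants += (int(rng.random() < mutants / N)
                        - int(rng.random() < mutants / N))
        fixed += (mutants == N)
    return fixed / runs

estimate = neutral_fixation_probability(N=10, runs=2000)   # close to 1/10
```

Under weak selection, the quantities of interest (e.g. fixation probabilities of strategies) deviate from this neutral value by terms proportional to the selection strength, which is what makes the neutral spatial correlations a usable starting point.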
Davis, M.C.; Turner, D.A.; Charters, J.R.; Golden, H.E.; Ali, A.; Fordham, E.W.
Various authors have advocated quantitative methods of evaluating bone scintigrams to detect sacroiliitis, while others have not found them useful. Many explanations for this disagreement have been offered, including differences in the method of case selection, ethnicity, gender, and previous drug therapy. It would appear that one of the most important impediments to consistent results is the variability of selecting sacroiliac joint and reference regions of interest (ROIs). The effect of ROI selection would seem particularly important because of the normal variability of radioactivity within the reference regions that have been used (sacrum, spine, iliac wing) and the inhomogeneity of activity in the SI joints. We have investigated the effect of ROI selection, using five different methods representative of, though not necessarily identical to, those found in the literature. Each method produced unique mean indices that were different for patients with ankylosing spondylitis (AS) and controls. The method of Ayres (19) proved superior (largest mean difference, smallest variance), but none worked well as a diagnostic tool because of substantial overlap of the distributions of indices of patient and control groups. We conclude that ROI selection is important in determining results, and quantitative scintigraphic methods in general are not effective tools for diagnosing AS. Among the possible factors limiting success, difficulty in selecting a stable reference area seems of particular importance.
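The paper's central point, that the same scan yields different indices depending on the reference ROI, can be illustrated with a toy calculation. The counts below are invented numbers, not patient data, and the ratio-of-means index is one common form of such an index, assumed here for illustration.

```python
# Illustration of ROI sensitivity: the same SI-joint counts give different
# indices depending on the chosen reference region.  All counts are invented.

def roi_index(si_counts, reference_counts):
    """SI-joint index = mean SI-joint activity / mean reference activity."""
    return (sum(si_counts) / len(si_counts)) / (sum(reference_counts) / len(reference_counts))

si_joint = [110, 120, 115, 125]             # counts in the SI-joint ROI

references = {
    "sacrum":     [100, 105, 95, 100],      # fairly uniform reference
    "iliac_wing": [80, 85, 75, 80],         # lower-activity reference
    "spine":      [90, 130, 95, 125],       # inhomogeneous reference
}

indices = {name: roi_index(si_joint, ref) for name, ref in references.items()}
```

A patient classified as abnormal against the iliac wing could look normal against the spine, which is exactly the kind of inconsistency the five compared methods exhibit.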
Narwaria, Manish; Mantel, Claire; Da Silva, Matthieu Perreira
With the aim of improving the immersive experience of the end user, High Dynamic Range (HDR) imaging has been gaining popularity. Therefore, proper validation and performance benchmarking of HDR processing algorithms is a key step towards standardization and commercial deployment. A crucial component of such validation studies is the selection of a challenging and balanced set of source (reference) HDR content. In order to facilitate this, we present an objective method based on the premise that a more challenging HDR scene encapsulates higher contrast, and as a result will show up more...
笠嶋, 善憲; Kasashima, Yoshinori
A frequency selective shielding (FSS) glass has many identical antenna elements printed on a sheet of glass, and it has high shielding properties at one specific frequency. Here, the author analyzed the characteristics of various FSSs with different antenna types by the FDTD method. The antenna types are cross dipole, circular loop, square loop, circular patch, and square patch. As a result, FSSs can be composed of the various types of antennas, and the FSSs have broad-band shielding characteristics.
Marques, Joselene; Igel, Christian; Lillholm, Martin
We present a texture analysis methodology that combined uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature spaces. Considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area-under-the-ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis.
This issue of the CLEIej consists of three main parts: (i) a review paper on the state of the art of how contextual information extracted from a user task can help to improve searches for contents relevant to this task; (ii) extended and revised versions of selected papers (the second- and third-best papers from each track) presented at the XX Ibero-American Conference on Software Engineering (CIbSE 2017), which took place in Buenos Aires, Argentina, in May 2017; and (iii) extended and revised versions of selected papers from LACLO 2016, the XI Latin American Conference on Learning Objects and Technology, which took place in San José, Costa Rica, in October 2016.
The study of speech timing, i.e. the duration and speed or tempo of speech events, has increased in importance over the past twenty years, in particular in connection with increased demands for accuracy, intelligibility and naturalness in speech technology, with applications in language teaching and testing, and with the study of speech timing patterns in language typology. However, the methods used in such studies are very diverse, and so far there is no accessible overview of them. Since the field is too broad for us to provide an exhaustive account, we have made two choices: first, to provide a framework of paradigmatic (classificatory), syntagmatic (compositional) and functional (discourse-oriented) dimensions for duration analysis; and second, to provide worked examples of a selection of methods associated primarily with these three dimensions. Some of the methods covered are established state-of-the-art approaches (e.g. the paradigmatic Classification and Regression Tree (CART) analysis); others are discussed in a critical light (e.g. the so-called 'rhythm metrics'). A set of syntagmatic approaches applies to the tokenisation and tree parsing of duration hierarchies based on speech annotations, and a functional approach describes duration distributions with sociolinguistic variables. Several of the methods are supported by a new web-based software tool for analysing annotated speech data, the Time Group Analyser.
Santos, Rui C.; Leal, Joao P.; Martinho Simoes, Jose A.
A revised parameterization of the extended Laidler method for predicting standard molar enthalpies of atomization and standard molar enthalpies of formation at T = 298.15 K for several families of hydrocarbons (alkanes, alkenes, alkynes, polyenes, poly-ynes, cycloalkanes, substituted cycloalkanes, cycloalkenes, substituted cycloalkenes, benzene derivatives, and bi- and polyphenyls) is presented. Data for a total of 265 gas-phase and 242 liquid-phase compounds were used for the calculation of the parameters. Comparison of the experimental values with those obtained using the additive scheme led to an average absolute difference of 0.73 kJ·mol⁻¹ for the gas-phase standard molar enthalpy of formation and 0.79 kJ·mol⁻¹ for the liquid-phase standard molar enthalpy of formation. The database used to establish the parameters was carefully reviewed by using, whenever possible, the original publications. A worksheet to simplify the calculation of standard molar enthalpies of formation and standard molar enthalpies of atomization at T = 298.15 K based on the extended Laidler parameters defined in this paper is provided as supplementary material.
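The additive scheme can be sketched in miniature. The two group values below are illustrative guesses chosen only to reproduce roughly the n-alkane trend; they are NOT the parameters fitted in this work, and the real extended Laidler scheme distinguishes many more group types than the two shown here.

```python
# Toy illustration of a Laidler-type additive scheme: the gas-phase enthalpy
# of formation of an n-alkane estimated as a sum of group terms.
# PARAMS are HYPOTHETICAL values for illustration, not the fitted parameters.

PARAMS = {
    "CH3": -42.0,   # kJ/mol per terminal methyl group (hypothetical)
    "CH2": -20.6,   # kJ/mol per methylene group (hypothetical)
}

def alkane_enthalpy_of_formation(n_carbons):
    """Estimated gas-phase ΔfH(298.15 K) of CH3-(CH2)_(n-2)-CH3 in kJ/mol."""
    if n_carbons < 2:
        raise ValueError("scheme shown only for ethane and longer n-alkanes")
    return 2 * PARAMS["CH3"] + (n_carbons - 2) * PARAMS["CH2"]

estimate_butane = alkane_enthalpy_of_formation(4)   # 2*(-42.0) + 2*(-20.6)
```

Fitting such parameters against a reviewed database of experimental values, family by family, is essentially what produces the sub-kJ·mol⁻¹ average deviations reported above.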
Vitorino, Sidney L.
The objective of this report is to assess how the multi-criteria method Analytic Hierarchy Process (AHP) can help a hospital complex to choose a suitable management system, known as Enterprise Resource Planning (ERP). The choice was very complex due to the novelty of the selection process and the conflicts generated between areas that did not have a single view of organizational needs, creating a lot of pressure on the department responsible for implementing systems. To assist in this process, an expert consultant in decision-making and AHP was hired, who, in the role of facilitator, helped define the criteria for system selection and enabled the choice to occur within a consensual process. We used a single case study, based on two in-depth interviews with the consultant and the project manager, and on documents generated by the advisory work and the tool that supported the method. The results of this analysis showed that the method could effectively contribute to the system acquisition process, but, despite employees' knowledge of the problems and senior management support, it was not used in subsequent decisions of the organization. We conclude that this method contributed to consensus in the procurement process, team commitment and the engagement of those involved.
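The core AHP calculation the facilitator would have run can be sketched briefly: a pairwise comparison matrix over the selection criteria is reduced to priority weights. The three criteria and the comparison values below are hypothetical, and the geometric-mean method is used as a standard approximation of the principal eigenvector.

```python
# Minimal AHP sketch: pairwise comparisons over three HYPOTHETICAL ERP
# selection criteria, reduced to priority weights by the geometric-mean
# approximation of the principal eigenvector.
import math

criteria = ["functional fit", "cost", "vendor support"]   # hypothetical

# pairwise[i][j] = how strongly criterion i is preferred over criterion j
# (Saaty's 1-9 scale; the matrix is reciprocal: pairwise[j][i] = 1/pairwise[i][j])
pairwise = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]

geo_means = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = [g / total for g in geo_means]   # priorities, summing to 1
```

Each candidate ERP system is then scored the same way against each criterion, and the criterion weights aggregate those scores into a final ranking, which is the step that made the hospital's choice auditable and consensual.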
Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)
The velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concepts of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available-velocity map for the robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that will cause a collision in the future can be chosen as the final velocity if the pass-time is sufficiently long.
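The velocity-selection idea above can be sketched with a single static circular obstacle: build a lattice of candidate velocities and score each one by a cost that prefers the robot's preferred velocity and penalizes collisions less the further in the future they would occur. The cost weights, the obstacle geometry and the specific cost form are illustrative assumptions, not the paper's exact formulation (which also includes vertical clearance).

```python
# Sketch of pass-time-aware velocity selection over a latticized velocity map.
# One static circular obstacle; weights and geometry are illustrative.
import math

OBSTACLE = (5.0, 0.0)   # centre of a static circular obstacle
RADIUS = 1.0            # combined robot + obstacle radius
V_PREF = (1.0, 0.0)     # the robot's preferred velocity

def pass_time(v):
    """Time of closest approach to the obstacle when moving with velocity v."""
    speed2 = v[0] ** 2 + v[1] ** 2
    return max((OBSTACLE[0] * v[0] + OBSTACLE[1] * v[1]) / speed2, 0.0)

def collides(v):
    """True if the straight-line path p(t) = t*v enters the obstacle circle."""
    t = pass_time(v)
    return math.hypot(t * v[0] - OBSTACLE[0], t * v[1] - OBSTACLE[1]) < RADIUS

def cost(v, w_pref=1.0, w_coll=5.0):
    """Prefer velocities near V_PREF; penalise collisions, but less so the
    further in the future they would occur (i.e. the longer the pass-time)."""
    c = w_pref * math.hypot(v[0] - V_PREF[0], v[1] - V_PREF[1])
    if collides(v):
        c += w_coll / max(pass_time(v), 1e-6)
    return c

# Latticized candidate velocities (vx > 0 keeps the robot moving forward).
candidates = [(vx / 2.0, vy / 2.0) for vx in range(1, 3) for vy in range(-2, 3)]
best = min(candidates, key=cost)
```

Because colliding candidates are only penalized (by the reciprocal of the pass-time) rather than excluded, a velocity whose collision lies far in the future can still win when the collision-free set is nearly empty, which is the crowded-obstacle situation the method targets.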
Zhongqin, G.; Chen, Y.
Abstract Quickly identifying the spatial distribution of landslides automatically is essential for the prevention, mitigation and assessment of landslide hazards. It remains a challenging task owing to the complicated characteristics and vague boundaries of landslide areas in imagery. High-resolution remote sensing images have multiple scales, complex spatial distributions and abundant features; object-oriented image classification methods can make full use of this information and thus effectively detect landslides after a hazard has occurred. In this research we present a new semi-supervised workflow that takes advantage of recent object-oriented image analysis and machine learning algorithms to quickly locate landslides of different origins in areas of southwest China. Beyond the usual sequence of image segmentation, feature selection, object classification and error testing, this workflow ensembles the feature selection and classifier selection steps. The features utilized in this study were normalized difference vegetation index (NDVI) change, textural features derived from gray-level co-occurrence matrices (GLCM), spectral features and others. The improvements in this study show that the algorithm removes redundant features and makes fuller use of the classifiers. These improvements lead to higher accuracy in delineating the shape of landslides in high-resolution remote sensing images, with flexibility across different kinds of landslides.
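The NDVI-change feature mentioned above can be sketched as follows; the reflectance values and the 0.2 drop threshold are illustrative assumptions, not values from the study.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index of one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def landslide_candidates(pre, post, drop=0.2):
    """Flag pixels whose NDVI fell by more than `drop` between the
    pre- and post-event scenes; each pixel is an (nir, red) pair."""
    return [[ndvi(*a) - ndvi(*b) > drop for a, b in zip(rp, rq)]
            for rp, rq in zip(pre, post)]

# One image row: a vegetated pixel that turned to bare soil, and an
# unchanged vegetated pixel (reflectances are made up).
pre = [[(0.6, 0.1), (0.6, 0.1)]]
post = [[(0.3, 0.25), (0.6, 0.1)]]
flags = landslide_candidates(pre, post)
```

In the full workflow such a vegetation-loss mask would be only one feature among the GLCM textural and spectral features fed to the object classifier.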
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them, or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and the four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills, let alone money and time, are scarce. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular, it addresses which method is most appropriate to a specific health services management problem, what the user might expect to obtain from the method, and what is required to use it. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.
Valdez, Carlos A.; Vu, Alexander K.
Provided herein are methods for selectively detecting an alkyne-presenting molecule in a sample, and related detection reagents, compositions, methods and systems. The methods include contacting a detection reagent with the sample for a time and under conditions that allow binding of the detection reagent to the one or more alkyne-presenting molecules possibly present in the matrix. The detection reagent includes an organic label moiety presenting an azide group. The binding of the azide group to the alkyne-presenting molecules results in emission of a signal from the organic label moiety.
Stahl, Shannon S; Rahimi, Alireza
Described is a method to oxidize lignin or lignin sub-units. The method includes oxidation of secondary benzylic alcohols in the lignin or lignin sub-unit to the corresponding ketones in the presence of unprotected primary aliphatic alcohols in the lignin or lignin sub-unit. The optimal catalyst system consists of HNO.sub.3 in combination with another Bronsted acid, in the absence of a metal-containing catalyst, thereby yielding a selectively oxidized lignin or lignin sub-unit. The method may be carried out in the presence or absence of additional reagents, including TEMPO and TEMPO derivatives.
Fikioris, George; Bakas, Odysseas N
This book describes and illustrates the application of several asymptotic methods that have proved useful in the authors' research in electromagnetics and antennas. We first define asymptotic approximations and expansions and explain these concepts in detail. We then develop certain prerequisites from complex analysis such as power series, multivalued functions (including the concepts of branch points and branch cuts), and the all-important gamma function. Of particular importance is the idea of analytic continuation (of functions of a single complex variable); our discussions here include som
Full Text Available The purpose of this paper is to present a new design method for lightweight parts fabricated by selective laser melting (SLM) based on the “Skin-Frame” concept, and to explore the influence of fabrication defects on SLM parts of different sizes. Standard lattice parts were designed according to the Chinese GB/T 1452-2005 standard and manufactured by SLM. These samples were then tested in an MTS Insight 30 compression testing machine to study the trends of the yield process with different structure sizes. A set of standard cylinder samples was also designed according to the Chinese GB/T 228-2010 standard. These samples, made of iron-nickel alloy (IN718), were likewise processed by SLM and then tested in the universal material testing machine INSTRON 1346 to obtain their tensile strength. Furthermore, a lightweight redesign method was researched, and common parts such as a stopper and a connecting plate were redesigned using this method. The redesigned parts were fabricated and some application tests have already been performed. The compression testing results show that when the minimum structure size is larger than 1.5 mm, the mechanical characteristics are hardly affected by process defects. The cylinder parts fractured in the universal material testing machine at about 1069.6 MPa. The redesigned parts worked well in application tests, with both the weight and the fabrication time of these parts reduced by more than 20%.
Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M.
Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed at automatically classifying tumor lesions as malign or benign, as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with high probability of malignant findings, aiding in the detection of potentially malign cases as a second opinion to the radiologist. (Author)
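A minimal sketch of two evaluation ingredients the abstract names, percentile-rank (p-rank) standardization and the ROC AUC via the rank-sum identity, is given below; this is a generic illustration, not the BCDR pipeline itself.

```python
def p_rank(values):
    """Percentile-rank standardisation: each value becomes the fraction
    of the other observations it exceeds, with ties counted half."""
    n = len(values)
    if n < 2:
        return [0.5] * n
    out = []
    for v in values:
        below = sum(x < v for x in values)
        ties = sum(x == v for x in values) - 1
        out.append((below + ties / 2) / (n - 1))
    return out

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: the probability that a random positive case outscores a
    random negative case, ties counted half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

In a leave-one-out setting, `scores` would collect the held-out prediction of each of the 143 segmentations and `auc` would then summarize them in a single figure of merit.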
García-Guzmán, Juan J; Hernández-Artiga, María P; Palacios-Ponce de León, Lourdes; Bellido-Milla, Dolores
A critical review of the methods recommended by international bodies and widely used in the winery industry and in research studies was performed. A laccase biosensor was applied to the selective determination of polyphenols in wines. The biosensor response was characterised; it responds mainly to o-diphenols, which are the principal polyphenols responsible for the stability and sensory qualities of wines. The spectrophotometric method to determine free and total sulphur dioxide recommended for beers was applied directly to wines. A sampling of 14 red and white wines was performed and they were analysed for the biosensor polyphenol index (IBP) and sulphur dioxide concentration (SO2). The antioxidant capacity by the ABTS(+) spectrophotometric method was also determined. A correlation study was performed to elucidate the influence of the polyphenols and SO2 on wine stability. High correlations were found between IBP and antioxidant capacity, and low correlation between SO2 and antioxidant capacity. To evaluate the benefits of wine drinking, a new parameter (IBP/SO2) is proposed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Full Text Available Hazardous waste is a substance that threatens people and the environment in cases of improper storage, disposal or transport, owing to its concentration and its physical and chemical properties. Companies producing hazardous waste through their activities mostly do not have disposal facilities of their own. In addition, they do not pay enough attention to selecting the right contractor as a disposal facility. On the other hand, various qualitative and quantitative criteria affect the selection of the contractor and conflict with each other. The aim of this study is to assist one such hazardous-waste-producing company in selecting the best contractor to eliminate its hazardous waste in an economical and harmless way. In the study, contractor weights (in percentages) are calculated using the Analytic Network Process (ANP), one of the multi-criteria decision making (MCDM) methods widely used in the literature, which considers both qualitative and quantitative criteria. In the next step, with the help of a mathematical model, the type of hazardous waste to be assigned to each contractor is identified. This integrated approach can be used as a guide for similar firms.
Klu, Y A K; Chen, J
This study observed the behaviour of probiotics and selected bacterial pathogens co-inoculated into peanut butter during gastrointestinal simulation. Peanut butter homogenates co-inoculated with Salmonella/Listeria strains (5 log CFU ml(-1)) and lyophilized or cultured probiotics (9 log CFU ml(-1)) were exposed to simulated gastrointestinal conditions for 24 h at 37°C. Sample pH, titratable acidity and pathogen populations were determined. An agar diffusion assay was performed to assess the inhibitory effect of probiotic culture supernatants with either natural (3·80 (Lactobacillus), 3·78 (Bifidobacterium) and 5·17 (Streptococcus/Lactococcus)) or neutralized (6·0) pH. The antibacterial effect of crude bacteriocin extracts was also evaluated against the pathogens. After 24 h, samples with probiotics had lower pH and higher titratable acidity than those without probiotics. The presence of probiotics caused a significant reduction in the pathogen populations. Probiotics in peanut butter survived simulated gastrointestinal conditions and inhibited the growth of Salmonella/Listeria. Peanut butter is a plausible carrier to deliver probiotics to improve the gastrointestinal health of children in developing countries. © 2016 The Society for Applied Microbiology.
Full Text Available Abstract Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight into the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework than genetic algorithms. A feature ranking is derived from the distribution estimated by the algorithm, and this ranking is then used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamic view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.
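A toy sketch of the underlying idea, assuming a UMDA-style univariate estimation of distribution algorithm and a caller-supplied subset scorer (the paper's iterative-discard stage and splice-site data are not reproduced):

```python
import random

def umda_feature_ranking(score, n_features, pop=30, keep=10,
                         gens=20, seed=1):
    """UMDA-style estimation of distribution algorithm: evolve one
    selection probability per feature from the elite of each generation,
    then report the features ranked by their final marginal probability.
    `score(mask)` rates a boolean feature mask supplied by the caller."""
    rng = random.Random(seed)
    probs = [0.5] * n_features
    for _ in range(gens):
        population = [[rng.random() < p for p in probs]
                      for _ in range(pop)]
        population.sort(key=score, reverse=True)
        elite = population[:keep]
        # Re-estimate marginals from the elite, clamped to keep diversity.
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / keep))
                 for i in range(n_features)]
    return sorted(range(n_features), key=lambda i: probs[i], reverse=True)

def toy_score(mask):
    # Two informative features (0 and 2) plus a small subset-size penalty;
    # a real scorer would be cross-validated classifier performance.
    return mask[0] + mask[2] - 0.1 * sum(mask)

ranking = umda_feature_ranking(toy_score, n_features=6)
```

The final marginal probabilities play the role of the paper's feature ranking, from which the least probable features would be discarded first.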
Different experiments have been applied to compare the performance of the classification methods with and without performing feature selection. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in different feature selection methods is investigated and the most frequent features selected in each fold among all methods for both datasets are evaluated.
Tan, Maxine; Pu, Jiantao; Zheng, Bin
Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. An image dataset including 1600 regions of interest (ROIs), of which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross-validation method. Among these four methods, SFFS has the highest efficacy; it takes only 3%-5% of the computational time of the GA approach, yet yields the highest performance level, with an area under the receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized
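The SFFS procedure that performed best in the comparison can be sketched as follows, with a caller-supplied subset scorer standing in for the ANN/cross-validation evaluation used in the study:

```python
def sffs(score, n_features, target_size):
    """Sequential floating forward selection sketch: a greedy forward
    step, then conditional backward steps that drop a feature whenever
    the result beats the best subset of that size seen so far."""
    selected = []
    best_by_size = {}  # size -> (score, subset)
    while len(selected) < target_size:
        # Forward: add the best single unselected feature.
        add = max((f for f in range(n_features) if f not in selected),
                  key=lambda f: score(selected + [f]))
        selected = selected + [add]
        k = len(selected)
        if k not in best_by_size or score(selected) > best_by_size[k][0]:
            best_by_size[k] = (score(selected), selected)
        # Floating backward: remove while it improves the best-of-size.
        while len(selected) > 2:
            drop = max(selected,
                       key=lambda f: score([g for g in selected if g != f]))
            trial = [g for g in selected if g != drop]
            if score(trial) > best_by_size[len(trial)][0]:
                best_by_size[len(trial)] = (score(trial), trial)
                selected = trial
            else:
                break
    return best_by_size[target_size][1]

# Toy scorer: additive feature worths (a real CAD scheme would plug in
# cross-validated classifier performance here).
worth = [5.0, 4.0, 3.0, 1.0, 0.5]
subset = sffs(lambda s: sum(worth[f] for f in s),
              n_features=5, target_size=3)
```

The floating backward step is what distinguishes SFFS from plain SFS: it lets the search undo an earlier greedy inclusion once a better same-size subset appears.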
In the oil snubbers used in the high-radiation environment of nuclear power stations, gas generation from the oil and deterioration of the rubber sealing material occur due to radiation damage; therefore periodic inspection and replacement are required during operation. The mechanical snubbers developed as aseismic supports in place of oil snubbers have entered the stage of practical use, and are made by two companies in the USA and one company in Japan. Their features compared with oil snubbers are as follows. The cost and time required for maintenance were made as small as possible, because an increased service life of the mechanical components can be expected, and the temperature dependence of mechanical snubbers is small. The matters demanding attention in maintenance are the secular change of the lubricating oil, the effect of radiation, and rust prevention for the ball screw bearings. These problems are being studied by the Power Reactor and Nuclear Fuel Development Corp. for the prototype fast reactor Monju. The structural feature is the conversion of the thrust movement of equipment and piping, due to thermal expansion and contraction or earthquakes, into rotating motion using ball screws. The features and construction of SMS-type mechanical snubbers, the tests and inspection prior to shipping, the method of selection, and the method of handling them in the field are explained. (Kako, I.)
McShane, M.J.; Cameron, B.D.; Cote, G.L.; Motamedi, M.; Spiegelman, C.H.
A new stepwise approach to variable selection for spectroscopy that includes chemical information and attempts to test several spectral regions producing high ranking coefficients has been developed to improve on currently available methods. Existing selection techniques can, in general, be placed into two groups: the first, time-consuming optimization approaches that ignore available information about sample chemistry and require considerable expertise to arrive at appropriate solutions (e.g. genetic algorithms), and the second, stepwise procedures that tend to select many variables in the same area containing redundant information. The algorithm described here is a fast stepwise procedure that uses multiple ranking chains to identify several spectral regions correlated with known sample properties. The multiple-chain approach allows the generation of a final ranking vector that moves quickly away from the initial selection point, testing several areas exhibiting correlation between spectra and composition early in the stepping procedure. Quantitative evidence of the success of this approach as applied to Raman spectroscopy is given in terms of processing speed, number of selected variables, and prediction error in comparison with other selection methods. In this respect, the procedure described here may be considered as a significant evolutionary step in variable selection algorithms. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
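A simplified stand-in for the redundancy-avoiding stepping described above (a single ranking chain rather than multiple, with a hypothetical ±window exclusion): after each pick, neighbouring spectral channels are blocked so the next selection moves to a different region of the spectrum.

```python
def select_channels(corr, n_select, window=2):
    """Stepwise channel selection with redundancy suppression: take the
    highest-ranked remaining channel, then block its +/-`window`
    neighbours so the next pick moves to a different spectral region."""
    order = sorted(range(len(corr)), key=lambda i: corr[i], reverse=True)
    picked, blocked = [], set()
    for i in order:
        if i in blocked:
            continue
        picked.append(i)
        blocked.update(range(i - window, i + window + 1))
        if len(picked) == n_select:
            break
    return picked

# Hypothetical correlation of each channel with the known sample
# property: channels 1 and 2 are adjacent and redundant, so the
# blocking skips channel 2 and jumps to channel 4.
channels = select_channels([0.1, 0.9, 0.85, 0.2, 0.8, 0.05], n_select=2)
```

The multiple-chain version in the paper additionally interleaves several such walks started from different high-ranking points, which moves the final ranking away from the initial selection region even faster.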
Oñate, Eugenio; García-Espinosa, Julio; Kvamsdal, Trond; Bergan, Pål; MARINE 2011
This book contains selected papers from the Fourth International Conference on Computational Methods in Marine Engineering, held at Instituto Superior Técnico, Technical University of Lisbon, Portugal, in September 2011. Nowadays, computational methods are an essential engineering tool, with a major field of interest in marine applications such as the maritime and offshore industries and engineering challenges related to the marine environment and renewable energies. The 2011 Conference included 8 invited plenary lectures and 86 presentations distributed through 10 thematic sessions that covered many of the most relevant topics of marine engineering today. This book contains 16 selected papers from the Conference that cover “CFD for Offshore Applications”, “Fluid-Structure Interaction”, “Isogeometric Methods for Marine Engineering”, “Marine/Offshore Renewable Energy”, “Maneuvering and Seakeeping”, “Propulsion and Cavitation” and “Ship Hydrodynamics”. The papers we...
Nakaya, Akihiro; Isobe, Sachiko N.
Background Genomic selection or genome-wide selection (GS) has been highlighted as a new approach for marker-assisted selection (MAS) in recent years. GS is a form of MAS that selects favourable individuals based on genomic estimated breeding values. Previous studies have suggested the utility of GS, especially for capturing small-effect quantitative trait loci, but GS has not become a popular methodology in the field of plant breeding, possibly because there is insufficient information avail...
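The genomic estimated breeding values (GEBVs) underlying GS can be sketched with a toy RR-BLUP-style model: ridge-penalised regression of phenotypes on -1/0/1 marker codes, fitted here by plain gradient descent. The marker data, effect sizes and hyperparameters are all illustrative assumptions, not from the paper.

```python
def gebv_ridge(markers, phenotypes, lam=0.01, lr=0.01, steps=5000):
    """Estimate marker effects by ridge-penalised least squares
    (plain gradient descent), in the spirit of RR-BLUP."""
    n, m = len(markers), len(markers[0])
    beta = [0.0] * m
    for _ in range(steps):
        grad = [lam * b for b in beta]  # ridge penalty gradient
        for x, y in zip(markers, phenotypes):
            resid = sum(b * xi for b, xi in zip(beta, x)) - y
            for j in range(m):
                grad[j] += resid * x[j] / n
        beta = [b - lr * g for b, g in zip(beta, grad)]
    return beta

def gebv(candidate, effects):
    """Genomic estimated breeding value of one candidate line."""
    return sum(e * x for e, x in zip(effects, candidate))

# Six training lines, two markers coded -1/0/1; phenotypes generated
# noise-free as 2*m1 - 1*m2 (all numbers are made up).
X = [[1, 1], [1, -1], [-1, 1], [-1, -1], [0, 1], [1, 0]]
y = [1, 3, -3, -1, -1, 2]
effects = gebv_ridge(X, y)
```

Selection then simply ranks unphenotyped candidates by `gebv`; the ridge penalty is what lets many small-effect markers be estimated jointly, the property the abstract highlights.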
Glenton, Claire; Lewin, Simon; Mayhew, Alain; Scheel, Inger; Odgaard-Jensen, Jan
Systematic reviews within the Cochrane Effective Practice and Organisation of Care Group (EPOC) can include both randomized and nonrandomized study designs. We explored how many EPOC reviews consider and identify nonrandomized studies, and whether the proportion of nonrandomized studies identified is linked to the review topic. We recorded the study designs considered in 65 EPOC reviews. For reviews that considered nonrandomized studies, we calculated the proportion of identified studies that were nonrandomized and explored whether there were differences in the proportion of nonrandomized studies according to the review topic. Fifty-one (78.5%) reviews considered nonrandomized studies. Forty-six of these reviews found nonrandomized studies, but the proportion varied a great deal (median, 33%; interquartile range, 25--50%). Reviews of health care delivery interventions had lower proportions of nonrandomized studies than those of financial and governance interventions. Most EPOC reviews consider nonrandomized studies, but the degree to which they find them varies. As nonrandomized studies are believed to be at higher risk of bias and their inclusion entails a considerable effort, review authors should consider whether the benefits justify the inclusion of these designs. Research should explore whether it is more useful to consider nonrandomized studies in reviews of some intervention types than others. Copyright © 2013 Elsevier Inc. All rights reserved.
Full Text Available Nowadays, the number of Web services on the Internet is quickly increasing, while different service providers offer numerous services with similar functions. Quality of Service (QoS) has become an important factor used to select the most appropriate service for users. The most prominent QoS-based service selection models take only deterministic attribute values into account, which is an idealized assumption: in the real world there are a large number of uncertain factors, and at runtime, in particular, QoS may become very poor or unacceptable. To solve this problem, a global service selection model based on uncertain QoS is proposed, including the corresponding normalization and aggregation functions, and a robust optimization model is then adopted to transform it. Experimental results show that the proposed method can effectively select services with high robustness and optimality.
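A minimal sketch of QoS normalization and aggregation with a crude robustness treatment, blending each attribute's nominal and worst-case values; this blend is a simple stand-in for the paper's robust optimization model, and all attribute values and weights below are hypothetical.

```python
def normalise(value, worst, best):
    """Map a raw QoS value to [0, 1]; works for benefit attributes
    (best > worst) and cost attributes (best < worst) alike."""
    if best == worst:
        return 1.0
    return (value - worst) / (best - worst)

def robust_select(candidates, weights, bounds, alpha=0.5):
    """Choose the service with the best weighted utility when each
    uncertain attribute is blended between its nominal and worst-case
    value; alpha is the degree of pessimism."""
    def utility(attrs):
        total = 0.0
        for (nominal, worst_case), w, (lo, hi) in zip(attrs, weights, bounds):
            blended = (1 - alpha) * nominal + alpha * worst_case
            total += w * normalise(blended, lo, hi)
        return total
    return max(candidates, key=lambda name: utility(candidates[name]))

weights = [0.5, 0.5]                    # response time, availability
bounds = [(1000.0, 0.0), (0.90, 1.00)]  # (worst, best) per attribute
candidates = {                          # (nominal, worst-case) pairs
    "A": [(100.0, 400.0), (0.999, 0.990)],
    "B": [(80.0, 900.0), (0.999, 0.950)],
}
nominal_best = robust_select(candidates, weights, bounds, alpha=0.0)
robust_best = robust_select(candidates, weights, bounds, alpha=0.5)
```

Service B looks best on nominal values alone, but once possible runtime degradation is weighed in, the more stable service A wins; this is the qualitative effect the robust model is after.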
Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton on Hudson, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hoenicke, Dirk (Seebruck-Seeon, DE); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY)
A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each of the respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment memory storage access distributed across the one or more memory storage structures.
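The first logic device described in the claim can be mimicked in software: a selector programmed with address-bit positions and match values fires when a physical address carries those values at those positions. The bit positions and the two-bank mapping below are illustrative, not taken from the patent.

```python
def make_selector(bit_positions, match_values):
    """Programmable select logic: the returned predicate fires when the
    physical address carries `match_values` at `bit_positions`."""
    def selects(addr):
        return all(((addr >> pos) & 1) == val
                   for pos, val in zip(bit_positions, match_values))
    return selects

# Hypothetical mapping: address bits 30-31 steer each access to one of
# two memory storage structures.
bank0 = make_selector((30, 31), (0, 0))
bank1 = make_selector((30, 31), (1, 0))
```

Because the positions and values are programmable rather than fixed, the same mechanism can redistribute the address space across storage structures without rewiring, which is the flexibility the patent claims.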
Cederkvist, Karin; Jensen, Marina B; Holm, Peter E
Stormwater treatment facilities (STFs) are becoming increasingly widespread but knowledge on their performance is limited. This is due to difficulties in obtaining representative samples during storm events and documenting removal of the broad range of contaminants found in stormwater runoff. This paper presents a method to evaluate STFs by addition of synthetic runoff with representative concentrations of contaminant species, including the use of tracer for correction of removal rates for losses not caused by the STF. A list of organic and inorganic contaminant species, including trace elements representative of runoff from roads is suggested, as well as relevant concentration ranges. The method was used for adding contaminants to three different STFs including a curbstone extension with filter soil, a dual porosity filter, and six different permeable pavements. Evaluation of the method showed that it is possible to add a well-defined mixture of contaminants despite different field conditions by having a flexibly system, mixing different stock-solutions on site, and use bromide tracer for correction of outlet concentrations. Bromide recovery ranged from only 12% in one of the permeable pavements to 97% in the dual porosity filter, stressing the importance of including a conservative tracer for correction of contaminant retention values. The method is considered useful in future treatment performance testing of STFs. The observed performance of the STFs is presented in coming papers. Copyright © 2017 Elsevier Ltd. All rights reserved.
R. Praveena Priyadarsini
Full Text Available Privacy preservation is a step in data mining that tries to safeguard sensitive information from unsanctioned disclosure, thereby protecting individual data records and their privacy. There are various privacy preservation techniques, such as k-anonymity, l-diversity, t-closeness and data perturbation. In this paper the k-anonymity privacy protection technique is applied to high-dimensional datasets such as the adult and census datasets. Since both datasets are high-dimensional, a feature subset selection method (Gain Ratio) is applied: the attributes of the datasets are ranked and low-ranking attributes are filtered out to form new reduced data subsets. The k-anonymization privacy preservation technique is then applied to the reduced datasets. The accuracy of the privacy-preserved reduced datasets and of the original datasets is compared on two data mining tasks, classification and clustering, using the naïve Bayesian and k-means algorithms respectively. Experimental results show that classification and clustering accuracy are comparable for the reduced k-anonymized datasets and the original datasets.
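The k-anonymity property applied to the reduced datasets can be verified with the standard test: every combination of quasi-identifier values must occur in at least k records. The attributes and generalized values below are illustrative only.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Standard k-anonymity test: every combination of quasi-identifier
    values must occur in at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

# Generalized (illustrative) records, e.g. after filtering low-ranked
# attributes and coarsening age and zip code:
rows = [
    {"age": "30-39", "zip": "980**"},
    {"age": "30-39", "zip": "980**"},
    {"age": "40-49", "zip": "981**"},
]
```

Here the first two rows form a 2-anonymous group, while the third row is a singleton, so the table as a whole fails the k = 2 test and would need further generalization or suppression.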
Full Text Available The aim of the project was to analyze the behavior of military communication networks based on work with real data collected continuously since 2005. Given the nature and amount of the data, data mining methods were selected for the analyses and experiments. The quality of real data is often insufficient for immediate analysis. The article presents the data cleaning operations carried out to improve the input data sample and obtain reliable models. Gradually, by means of properly chosen software, network models were developed to verify generally valid patterns of network behavior as a bulk service. Furthermore, unlike commercially available communication network simulators, the models designed allowed us to capture nonstandard modes of network behavior under an increased load, verify the correct sizing of the network for the increased load, and thus test its reliability. Finally, based on previous experience, the models enabled us to predict emergency situations with reasonable accuracy.
Sunaryo, Geni Rina; Sumijanto; Nurul L, Siti
The final objective of this research is to design a mini-scale desalination installation. The project started in 1997/1998 and has been under way for three years: a study assessing various desalination systems was carried out in the first year and a thermodynamic study in the second. In this third year, a literature study on material resistance to external pressure has been carried out. The pressure in the single-evaporator flashing method depends mainly on the temperature applied in the system. In this paper, the stage configuration and the method of selecting materials for the main evaporator vessel, tubes, tube plates, water boxes, pipework and valves for multistage flash distillation are described. The selection of materials for MSF is based on economic considerations: the materials should be cheap, highly resistant and easy to maintain.
Igor V. KONONENKO
Full Text Available An analysis of the literature devoted to the selection of a project management approach, and to the development of effective methods for solving this problem, is given. A mathematical model and a method for selecting a project management approach under fuzzy notions of the applicability of existing approaches are proposed. The selection is made among approaches such as the PMBOK Guide, the ISO 21500 standard, the PRINCE2 methodology, the SWEBOK Guide, and the agile methodologies Scrum, XP and Kanban. The project parameters that strongly influence the result of the selection, and the measure of their impact, are determined. These parameters concern information about the project, the team, communication and critical project risks; they include the number of people involved in the project, the customer's experience with this project team, the project team's experience in the field, the project team's understanding of the requirements, its ability to adapt, its initiative, and others. The suggested method is illustrated by applying it to the selection of a project management approach for a software development project.
Boulkedid, Rym; Abdoul, Hendy; Loustau, Marine; Sibony, Olivier; Alberti, Corinne
The Delphi technique is a structured process commonly used to develop healthcare quality indicators, but there is little guidance for researchers who wish to use it. This study aimed 1) to describe reporting of the Delphi method used to develop quality indicators, 2) to discuss specific methodological skills for quality indicator selection, and 3) to give guidance on this practice. Three electronic databases were searched over a 30-year period (1978-2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey, and Delphi results) were assessed. Across the 80 included studies, quality of reporting varied significantly between items (from 9% for the experts' years of experience to 98% for the type of Delphi used). Reporting of the methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) reported that feedback was given between rounds, 77% (62/80) reported the method used to achieve consensus, and 57% (48/80) listed the quality indicators selected at the end of the survey. A modified Delphi procedure, with a physical meeting of the panel members usually held between Delphi rounds, was used in 49/78 (63%) studies. The median number of panel members was 17 (Q1: 11; Q3: 31). In 40/70 (57%) studies the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. Among the 75 studies describing criteria used to select quality indicators, 28 (37%) used validity and 17 (23%) feasibility. The use and reporting of the Delphi method for quality indicator selection need to be improved. We provide some guidance to investigators to improve the use and reporting of the method in future surveys.
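As a sketch of the kind of consensus rule a Delphi round applies when selecting indicators, consider the following illustration. The 1-9 scale, the median cut-off of 7, and the crude range-based agreement check are common conventions assumed here for illustration, not this review's prescription, and the indicator names and panel scores are invented.

```python
import statistics

def select_indicators(ratings, median_cutoff=7, max_spread=2):
    """One Delphi-style selection round.

    ratings: dict mapping indicator name -> list of panel scores (1-9).
    Keep an indicator if its median rating is high and the panel agrees
    (here measured crudely as the range of the scores)."""
    kept = []
    for name, scores in ratings.items():
        med = statistics.median(scores)
        spread = max(scores) - min(scores)
        if med >= median_cutoff and spread <= max_spread:
            kept.append(name)
    return kept

# hypothetical first-round ratings from a five-member panel
round1 = {
    "caesarean rate documented": [8, 9, 8, 7, 8],
    "time to antibiotics":       [9, 7, 8, 8, 9],
    "patient satisfaction":      [5, 9, 3, 8, 6],
}
selected = select_indicators(round1)
print(selected)
```

Indicators that fail the rule would typically be fed back to the panel, with the group's ratings, for re-scoring in the next round.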
Vlada S. Sokolović
Introduction. This paper presents a critical analysis of the signal processing flow carried out in a software GPS receiver and a critical comparison of different architectures for signal processing within the GPS receiver. A model of software receivers is shown. Based on this model, a receiver has been implemented in the MATLAB software package, in which simulations of the signal processing were carried out. The aim of this paper is to demonstrate the advantages and disadvantages of different methods of signal synchronization in the receiver, and to propose a solution acceptable for possible implementation. The signal processing flow was observed from the input circuit to the extraction of the navigation message bits. All signal processing was performed on the L1 signal and on data collected by the SE4110 input circuit. The radio signal from the satellite is received by the input circuit, filtered, and converted into digital form. The input circuit is the end of the receiver hardware; the digital signal from it is brought into a Pentium 4 PC (AMD 3000+), where the receiver is realized in MATLAB. Model of a software GPS receiver. The first level of processing is signal acquisition, realized using cyclic convolution. The acquisition process measures the signal parameters from the satellites, and these parameters are passed to the next level of processing, which tracks signal synchronization and extracts the navigation message bits. On the basis of the detected navigation message, the receiver calculates the position of a satellite and then, from the satellite positions, its own position. Tracking of GPS signal synchronization. In order to select the most acceptable method of signal synchronization in the receiver, different synchronization methods are compared: the early-late DLL (Delay Lock Loop), the TDL (Tau Dither Loop)
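The acquisition step described above, correlating the received signal against a local code replica via cyclic convolution, can be sketched as follows. This is a minimal Python/NumPy illustration rather than the paper's MATLAB implementation; the 1023-chip code here is a random ±1 sequence standing in for a real GPS C/A code, and the delay and noise level are arbitrary.

```python
import numpy as np

def circular_correlation(signal, code):
    """Circular cross-correlation via the FFT (cyclic convolution),
    as used for GPS signal acquisition: corr[k] peaks at the code delay."""
    return np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code))).real

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)   # toy PRN code, not a real C/A sequence
true_delay = 217
# received signal = delayed code plus noise
received = np.roll(code, true_delay) + 0.5 * rng.standard_normal(1023)

corr = circular_correlation(received, code)
estimated_delay = int(np.argmax(corr))
print(estimated_delay)
```

A full acquisition search would repeat this correlation over a grid of Doppler frequency bins as well; only the code-delay dimension is shown here.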
Calus, M.P.L.; Veerkamp, R.F.
Background Genomic selection has become a very important tool in animal genetics and is rapidly emerging in plant genetics. It holds the promise to be particularly beneficial to select for traits that are difficult or expensive to measure, such as traits that are measured in one environment and
Sun, Dan; Garmory, Andrew; Page, Gary J.
For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
Hlavacek, I.; Hlavackova, I.
Issues of the inspection and control of bituminized and cemented waste are discussed, and some methods of their nondestructive testing are described. Attention is paid to the inspection techniques, non-nuclear spectral techniques in particular, as employed for quality control of the wastes, waste concentrates, spent waste leaching solutions, as well as for the examination of environmental samples (waters and soils) from the surroundings of nuclear power plants. Some leaching tests used abroad for this purpose and practical analyses by the ICP-AES technique are given by way of example. The ICP-MS technique, which is unavailable in the Czech Republic, is routinely employed abroad for alpha nuclide measurements; examples of such analyses are also given. The next topic discussed includes the monitoring of organic acids and complexants to determine the degree of their thermal decomposition during the bituminization of wastes on an industrial line. All of the methods and procedures highlighted can be used as technical support during the monitoring of radioactive waste properties in industrial conditions, in the chemical and radiochemical analyses of wastes and related matter, in the calibration of nondestructive testing instrumentation, in the monitoring of contamination of the surroundings of nuclear facilities, and in trace analysis. (author). 10 tabs., 1 fig., 14 refs
Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan
The concept of a composite road safety index is a popular and relatively new one among road safety experts around the world. As there is a constant need to compare different units (countries, municipalities, roads, etc.), an adequate method must be chosen that makes the comparison fair to all compared units. Comparisons based on one specific indicator (a parameter describing safety or unsafety) can produce totally different rankings of the compared units, which makes it complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing a single adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used the efficiencies (composite indexes) obtained by different DEA- and TOPSIS-based models to present the PROMETHEE-RS model for selecting the optimal composite index method. The selection is based on three parameters (average correlation, average rank variation, and average cluster variation) fed into the PROMETHEE MCDM method in order to choose the optimal method. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
Millwater, H. R.; Cruse, T. A.
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.
González-Peña, D; Knox, R V; MacNeil, M D; Rodriguez-Zas, S L
Four semen traits: volume (VOL), concentration (CON), progressive motility of spermatozoa (MOT), and abnormal spermatozoa (ABN) provide complementary information on boar fertility. Assessment of the impact of selection for semen traits is hindered by limited information on economic parameters. Objectives of this study were to estimate economic values for semen traits and to evaluate the genetic gain when these traits are incorporated into traditional selection strategies in a 3-tier system of swine production. Three-way (maternal nucleus lines A and B and paternal nucleus line C) and 4-way (additional paternal nucleus line D) crossbreeding schemes were compared. A novel population structure that accommodated selection for semen traits was developed. Three selection strategies were simulated. Selection Strategy I (baseline) encompassed selection for maternal traits: number of pigs born alive (NBA), litter birth weight (LBW), adjusted 21-d litter weight (A21), and number of pigs at 21 d (N21); and paternal traits: number of days to 113.5 kg (D113), backfat (BF), ADG, feed efficiency (FE), and carcass lean % (LEAN). Selection Strategy II included Strategy I and the number of usable semen doses per collection (DOSES), a function of the 4 semen traits. Selection Strategy III included Strategy I and the 4 semen traits individually. The estimated economic values of VOL, CON, MOT, ABN, and DOSES for 7 to 1 collections/wk ranged from $0.21 to $1.44/mL, $0.12 to $0.83/10 spermatozoa/mm, $0.61 to $12.66/%, -$0.53 to -$10.88/%, and $2.01 to $41.43/%, respectively. The decrease in the relative economic values of semen traits and DOSES with higher number of collections per wk was sharper between 1 and 2.33 collections/wk than between 2.33 and 7 collections/wk. The higher economic value of MOT and ABN relative to VOL and CON could be linked to the genetic variances and covariances of these traits. Average genetic gains for the maternal traits were comparable across strategies
This paper presents issues related to the presence and removal of copper compounds from industrial effluents, including wastewater from plants involved in the production of printed circuit boards. The toxicological properties of selected copper compounds are characterized, and the applicable technological processes, the sources of copper ions in the effluents, and selected methods for their removal are described.
The sole purpose of supplier selection is not limited to obtaining supply at low cost and at the right time. Supplier selection is a strategic decision intended to fulfil a company's goals over a long period of time at low risk. To accomplish this objective, companies are moving from reactive to proactive buying, giving more priority to the co-creation of wealth with suppliers. With this in mind, this paper attempts to give a systematic review of the supplier selection and evaluation process from 2005...
Fan, Maomian; Roper, Shelly; Andrews, Carrie; Allman, Amity; Bruno, John; Kiel, Jonathan
...). This process has been used to select aptamers against different types of targets (Bacillus anthracis spores, Bacillus thuringiensis spores, MS-2 bacteriophage, ovalbumin, and botulinum neurotoxin...
Ananikov, V P; Khemchyan, L L; Ivanova, Yu V; Dilman, A D; Levin, V V; Bukhtiyarov, V I; Sorokin, A M; Prosvirin, I P; Romanenko, A V; Simonov, P A; Vatsadze, S Z; Medved'ko, A V; Nuriev, V N; Nenajdenko, V G; Shmatova, O I; Muzalevskiy, V M; Koptyug, I V; Kovtunov, K V; Zhivonitko, V V; Likholobov, V A
The challenges of the modern society and the growing demand of high-technology sectors of industrial production bring about a new phase in the development of organic synthesis. A cutting edge of modern synthetic methods is introduction of functional groups and more complex structural units into organic molecules with unprecedented control over the course of chemical transformation. Analysis of the state-of-the-art achievements in selective organic synthesis indicates the appearance of a new trend — the synthesis of organic molecules, biologically active compounds, pharmaceutical substances and smart materials with absolute selectivity. Most advanced approaches to organic synthesis anticipated in the near future can be defined as 'atomic precision' in chemical reactions. The present review considers selective methods of organic synthesis suitable for transformation of complex functionalized molecules under mild conditions. Selected key trends in the modern organic synthesis are considered including the preparation of organofluorine compounds, catalytic cross-coupling and oxidative cross-coupling reactions, atom-economic addition reactions, methathesis processes, oxidation and reduction reactions, synthesis of heterocyclic compounds, design of new homogeneous and heterogeneous catalytic systems, application of photocatalysis, scaling up synthetic procedures to industrial level and development of new approaches to investigation of mechanisms of catalytic reactions. The bibliography includes 840 references
Carlos A. Caceres
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
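A rough sketch of the correlation-based stability idea discussed above, scoring features by how consistently they correlate with a target across data splits, is given below in NumPy. The fold scheme, scoring formula, and toy regression data are our own simplifications for illustration, not the paper's actual pipeline or neuroimaging data.

```python
import numpy as np

def stability_select(X, y, n_folds=5, n_keep=10):
    """Rank features by mean absolute feature/target correlation across
    folds (a simple stability criterion) and return the top n_keep."""
    n = len(y)
    folds = np.array_split(np.arange(n), n_folds)
    scores = np.zeros((n_folds, X.shape[1]))
    for i, held_out in enumerate(folds):
        train = np.setdiff1d(np.arange(n), held_out)
        Xc = X[train] - X[train].mean(axis=0)
        yc = y[train] - y[train].mean()
        # |Pearson correlation| of every feature with the target
        scores[i] = np.abs(Xc.T @ yc) / (
            np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(scores.mean(axis=0))[::-1][:n_keep]

# toy demo: features 0 and 1 carry the signal, the other 48 are noise
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * rng.standard_normal(200)
selected = stability_select(X, y, n_keep=5)
print(selected[:2])  # the two signal-carrying features should rank highest
```

Averaging the scores over folds is what rewards features whose correlation with the target is stable rather than accidental to one split.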
A pragmatic pairwise group-decision approach is applied to compare two regions in order to select the more suitable one for the construction of nuclear power plants in the Kingdom of Saudi Arabia. The selection methodology is based on pairwise comparison by forced choice. The method facilitates rating of the regions or sites using simple calculations. Two regions, one close to Dhahran on the Arabian Gulf and another close to Jeddah on the Red Sea, are evaluated; no specific site in either region is considered at this stage. The comparison is based on a set of selection criteria comprising (i) topography, (ii) geology, (iii) seismology, (iv) meteorology, (v) oceanography, (vi) hydrology and (vii) proximity to oil and gas fields. The comparison shows that the Jeddah region is more suitable than the Dhahran region. (orig.)
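A forced-choice pairwise comparison of this kind reduces to tallying, criterion by criterion, which of the two regions is preferred, optionally with criterion weights. The sketch below illustrates only that bookkeeping; the choices and equal weights shown are hypothetical, not the study's actual ratings.

```python
def pairwise_score(choices, weights=None):
    """choices maps criterion -> 'A' or 'B' (the forced choice);
    returns (score_A, score_B), criterion-weighted if weights given."""
    weights = weights or {c: 1.0 for c in choices}
    score_a = sum(w for c, w in weights.items() if choices[c] == "A")
    score_b = sum(w for c, w in weights.items() if choices[c] == "B")
    return score_a, score_b

# hypothetical illustration: region A = Dhahran, region B = Jeddah
choices = {"topography": "B", "geology": "A", "seismology": "B",
           "meteorology": "B", "oceanography": "A", "hydrology": "B",
           "proximity to oil/gas fields": "B"}
a, b = pairwise_score(choices)
print("B preferred" if b > a else "A preferred")
```

With several evaluators, the same tally is simply summed over the panel before the two totals are compared.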
Obrist, Marianna; Knoche, Hendrik; Basapur, Santosh
The scope of user experience supersedes the concept of usability and other performance oriented measures by including for example users' emotions, motivations and a strong focus on the context of use. The purpose of this tutorial is to motivate researchers and practitioners to think about...... the challenging questions around how to select and apply UX evaluation methods for different usage contexts, in particular for the "home" and "mobile" context, relevant for TV-based services. Next to a general understanding of UX evaluation and available methods, we will provide concrete UX evaluation case...
Hejlesen, Mads Mølholm
ring dynamics is presented based on the alignment of the vorticity vector with the principal axis of the strain rate tensor. A novel iterative implementation of the Brinkman penalisation method is introduced for the enforcement of a fluid-solid interface in re-meshed vortex methods. The iterative scheme...... is included to explicitly fulfil the kinematic constraints of the flow field. The high order, unbounded particle-mesh based vortex method is used to simulate the instability, transition to turbulence and eventual destruction of a single vortex ring. From the simulation data, a novel analysis on the vortex...
Wiebe, Nicholas J P; Meyer, Irmtraud M
The prediction of functional RNA structures has attracted increased interest, as it allows us to study the potential functional roles of many genes. RNA structure prediction methods, however, assume that there is a unique functional RNA structure and also do not predict functional features required for in vivo folding. In order to understand how functional RNA structures form in vivo, we require sophisticated experiments or reliable prediction methods. So far, there exist only a few experimentally validated transient RNA structures. On the computational side, there exist several computer programs which aim to predict the co-transcriptional folding pathway in vivo, but these make a range of simplifying assumptions and do not capture all features known to influence RNA folding in vivo. We want to investigate if evolutionarily related RNA genes fold in a similar way in vivo. To this end, we have developed a new computational method, Transat, which detects conserved helices of high statistical significance. We introduce the method, present a comprehensive performance evaluation and show that Transat is able to predict the structural features of known reference structures, including pseudo-knotted ones, as well as those of known alternative structural configurations. Transat can also identify unstructured sub-sequences bound by other molecules and provides evidence for new helices which may define folding pathways, supporting the notion that homologous RNA sequences not only assume a similar reference RNA structure, but also fold similarly. Finally, we show that the structural features predicted by Transat differ from those assuming thermodynamic equilibrium. Unlike the existing methods for predicting folding pathways, our method works in a comparative way. This has the disadvantage of not being able to predict features as a function of time, but has the considerable advantage of highlighting conserved features and of not requiring a detailed knowledge of the cellular
Bresson, G.; Lacourly, G.; Fitoussi, L.
The selection of the site of a nuclear plant most often comes to seek out and compound between two trends: that of the operator who will try and reduce the cost price of his product to the lowest and that of the protectionist who will try and reduce to the minimum the hazards resulting from the plant operation. Such a compromise is the result of a more or less empirical choice, which enters within the frame of a cost-benefit analysis, in which theoretically, the choice between several possible solutions is made of the selection giving the higher advantage [fr
Ban, Chunmei; Wu, Zhuangchun; Dillon, Anne C.
An electrode (110) is provided that may be used in an electrochemical device (100) such as an energy storage/discharge device, e.g., a lithium-ion battery, or an electrochromic device, e.g., a smart window. Hydrothermal techniques and vacuum filtration methods were applied to fabricate the electrode (110). The electrode (110) includes an active portion (140) that is made up of electrochemically active nanoparticles, with one embodiment utilizing 3d-transition metal oxides to provide the electrochemical capacity of the electrode (110). The active material (140) may include other electrochemical materials, such as silicon, tin, lithium manganese oxide, and lithium iron phosphate. The electrode (110) also includes a matrix or net (170) of electrically conductive nanomaterial that acts to connect and/or bind the active nanoparticles (140) such that no binder material is required in the electrode (110), which allows more active materials (140) to be included to improve energy density and other desirable characteristics of the electrode. The matrix material (170) may take the form of carbon nanotubes, such as single-wall, double-wall, and/or multi-wall nanotubes, and be provided as about 2 to 30 percent weight of the electrode (110) with the rest being the active material (140).
Tetiana D. Panchenko
The article deals with the creation of methodical support for the mathematical modeling of dynamic processes in elements of systems and complexes. Ordinary differential equations are used as the mathematical models; the coefficients of the model equations may be nonlinear functions of the process. The projection-grid method is used as the main tool. Iterative algorithms that take an approximate solution into account prior to the first iteration are described, and adaptive control of the computing process is proposed. An original method for estimating the error of the computed solutions is offered, together with a method for tuning the parameters of the adaptive solution method to a given error level. The proposed approach can be used for distributed computing.
Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong
Near infrared spectroscopy was applied to measure tissue slices of endometrial tissues and collect their spectra. A total of 154 spectra were obtained from 154 samples; the numbers of normal, hyperplasia, and malignant samples were 36, 60, and 58, respectively. Original near infrared spectra are composed of many variables, including interference such as instrument errors and physical effects like particle size and light scatter. To reduce these influences, the original spectral data should be treated with different spectral preprocessing methods to compress the variables and extract useful information, so the methods of spectral preprocessing and wavelength selection play an important role in the near infrared spectroscopy technique. In the present paper the raw spectra were processed using various preprocessing methods, including first derivative, multiplicative scatter correction, the Savitzky-Golay first-derivative algorithm, standard normal variate, smoothing, and moving-window median. The standard deviation was used to select the optimal spectral region of 4000-6000 cm(-1). Principal component analysis was then used for classification; its results showed that the three types of samples could be discriminated completely, with an accuracy of almost 100%. This study demonstrated that near infrared spectroscopy technology combined with chemometric methods could be a fast, efficient, and novel means of diagnosing cancer. The proposed methods would be a promising and significant diagnostic technique for early-stage cancer.
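The scatter-correction and principal component steps of a pipeline like this can be sketched generically. The NumPy code below implements standard normal variate (SNV) preprocessing and PCA via the SVD on synthetic spectra of our own invention; it is not the authors' data or software, and the two toy "classes" merely stand in for tissue types.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum
    individually, suppressing additive/multiplicative scatter effects."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def pca_scores(X, n_components=2):
    """Project mean-centred data onto its leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# synthetic spectra: two classes differing by one absorption band,
# each spectrum offset by a random scatter baseline
rng = np.random.default_rng(2)
base = np.sin(np.linspace(0.0, 6.0, 300))
band = 0.5 * np.exp(-((np.arange(300) - 150) ** 2) / 200.0)
cls0 = base + 0.05 * rng.standard_normal((20, 300)) + rng.uniform(0, 2, (20, 1))
cls1 = base + band + 0.05 * rng.standard_normal((20, 300)) + rng.uniform(0, 2, (20, 1))

X_all = np.vstack([cls0, cls1])
corrected = snv(X_all)          # scatter offsets removed
scores = pca_scores(corrected)  # 2-D scores for class visualisation
```

After SNV each spectrum has zero mean and unit standard deviation, so the baseline offsets no longer dominate the principal components.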
This paper is concerned with determining the best initial method for a transportation problem. Seven initial methods are considered and compared. One is a new method that has not been reported in the literature. Comparison is done on the basis of the number of iterations required to reach the final solution if the concerned ...
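One of the classic initial methods compared in studies of this kind is the northwest corner rule, which builds an initial basic feasible solution by allocating greedily from the top-left cell of the cost tableau. A minimal sketch follows; the supply and demand figures are illustrative, not taken from the paper.

```python
def northwest_corner(supply, demand):
    """Initial basic feasible solution for a balanced transportation
    problem: fill the top-left cell, exhaust a row or column, move on."""
    supply, demand = supply[:], demand[:]      # work on copies
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1   # row exhausted: move down
        else:
            j += 1   # column exhausted: move right
    return alloc

plan = northwest_corner([20, 30, 25], [10, 25, 15, 25])
print(plan)  # [[10, 10, 0, 0], [0, 15, 15, 0], [0, 0, 0, 25]]
```

The rule ignores costs entirely, which is exactly why cost-aware initial methods (least cost, Vogel's approximation, and similar) typically need fewer subsequent iterations to reach the optimum.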
Background: Health systems play key roles in identifying tobacco users and providing evidence-based care to help them quit. This treatment includes different methods such as simple medical consultation, medication, and telephone counseling. The aim was to assess the different quit-smoking methods selected by patients in tobacco cessation centers in Iran in order to identify those that are most appropriate for the country's health system. Methods: In this cross-sectional, descriptive study, a random sample of all quit centers at the country level was used to obtain a representative sample. Patients completed a self-administered questionnaire containing 10 questions regarding the quality, cost, effect, side effects, and results of quitting methods, using a 5-point Likert-type scale. Percentages, frequencies, means, t-tests, and variance analyses were computed for all study variables. Results: A total of 1063 smokers returned completed survey questionnaires. The most frequently used methods were nicotine replacement therapy (NRT) and combination therapy (NRT and counseling), reported by 228 and 163 individuals respectively. The least used methods were hypnotism (n = 8) and quit-and-win (n = 17). The methods with the maximum scores were the combined method, personal methods, and Champix, with means of 21.4, 20.4, and 18.4 respectively; the minimum scores were for e-cigarettes, hypnotism, and education, with means of 12.8, 11, and 10.8. There were significant differences in mean scores across cities and methods. Conclusions: According to the smokers' selections, combined therapy, personal methods, and Champix are the most effective quit-smoking methods, and they should be given greater consideration in the country's health system.
Víšek, Jan Ámos
Roč. 21, č. 11 (2005), s. 1-58 ISSN 1212-074X R&D Projects: GA ČR(CZ) GA402/03/0084 Institutional research plan: CEZ:AV0Z10750506 Keywords : robust regression * model selection * uniform consistency of M-estimators Subject RIV: BA - General Mathematics
Jiřina, Marcel; Jiřina jr., M.
Roč. 23, č. 5 (2013), s. 451-464 ISSN 1210-0552 Institutional support: RVO:67985807 Keywords : multivariate data * GMDH * linear regression * Gauss-Markov conditions * cloning * genetic selection * classification Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.412, year: 2013
The purpose of HSR&TD is to deliver human support technologies to the Exploration Systems Mission Directorate (ESMD) that will be selected for future missions. This requires identifying promising candidate technologies and advancing them in technology readiness until they are acceptable. HSR&TD must select an array of technology development projects, guide them, and either terminate or continue them, so as to maximize the resulting number of usable advanced human support technologies. This paper proposes an effective project scoring methodology to support managing the HSR&TD project portfolio. Researchers strongly disagree as to which technology project selection methods are best, or even whether there are any proven ones. Technology development is risky, and outstanding achievements are rare and unpredictable. There is no simple formula for success. Organizations that are satisfied with their project selection approach typically use a mix of financial, strategic, and scoring methods in an open, established, explicit, formal process. This approach helps to build consensus and develop management insight. It encourages better project proposals by clarifying the desired project attributes. We propose a project scoring technique based on a method previously used in a federal laboratory and supported by recent research. Projects are ranked by their perceived relevance, risk, and return - a new 3 R's. Relevance is the degree to which the project objective supports the HSR&TD goal of developing usable advanced human support technologies. Risk is the estimated probability that the project will achieve its specific objective. Return is the reduction in mission life cycle cost obtained if the project is successful. If the project objective technology performs a new function with no current cost, its return is the estimated cash value of performing the new function. The proposed project selection scoring method includes definitions of the criteria, a project evaluation
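Read together, the 3 R's amount to an expected-value ranking: relevance (how well the objective supports the goal) times risk (probability of success) times return (life-cycle saving if successful). The sketch below illustrates only that arithmetic; the project names, scales, and numbers are hypothetical, not HSR&TD's actual portfolio or scoring rubric.

```python
def project_score(relevance, p_success, lifecycle_savings):
    """Expected value of a project under the 3 R's:
    relevance (0-1) * probability of success (0-1) * return ($M saved)."""
    return relevance * p_success * lifecycle_savings

# hypothetical portfolio for illustration
projects = {
    "water recovery upgrade": project_score(0.9, 0.6, 40.0),   # 21.6
    "novel CO2 sorbent":      project_score(0.8, 0.4, 90.0),   # 28.8
    "crew display redesign":  project_score(0.4, 0.9, 10.0),   #  3.6
}
ranking = sorted(projects, key=projects.get, reverse=True)
print(ranking)
```

Note how the multiplicative form trades the R's off against one another: a risky project can still rank first if its relevance and potential return are high enough.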
Barkeshli, Kasra; Volakis, John L.
The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature which eliminates a need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus, eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
Henao, Ricardo; Winther, Ole
We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two-step alternating procedure of active set update rules and hyperparameter optimization based upon marginal … high impact to the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria: the first one prefers a model with interpretable active set parameters, whereas the second puts computational complexity first, thus a model … with active set parameters that directly control its complexity. We also provide both theoretical and empirical support for our active set selection strategy being a good approximation of a full Gaussian process classifier. Our extensive experiments show that our approach can compete with state …
Chaharsooghi, S. K.; Ashrafi, Mehdi
Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain...
Najafi, Amir Abbas; Pourahmadi, Zahra
Selecting the optimal combination of assets in a portfolio is one of the most important decisions in investment management. As investment is a long-term concept, looking at a portfolio optimization problem over just a single period may forgo opportunities that could be exploited in a long-term view. Hence, we extend the problem from a single-period to a multi-period model. We include trading costs and uncertain conditions in this model, which makes it more realistic and more complex, and we therefore propose an efficient heuristic method to tackle the problem. The efficiency of the method is examined and compared with the results of rolling single-period optimization and the buy-and-hold method, which shows the superiority of the proposed method.
Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.
Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine whether sputum is an acceptable alternative to the more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Sputum and BAL yielded similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical for patients being tested for Legionella. PMID:27630979
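The four performance figures quoted above all derive from a standard 2x2 confusion table. A minimal sketch of the definitions, with hypothetical counts chosen to roughly reproduce the UAT-versus-culture figures:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 confusion-table metrics (tp/fp/fn/tn counts)."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among diseased
        "specificity": tn / (tn + fp),  # true negatives among healthy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only, not the study's actual data.
m = diagnostic_metrics(tp=87, fp=49, fn=13, tn=851)
```

These counts give sensitivity 0.87 and specificity, PPV and NPV close to the reported 94.7%, 63.8% and 98.5%, showing how a low PPV can coexist with high sensitivity when disease prevalence is low.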
Garcia, S.; Perez, R. M.
A study on the comparison and evaluation of a miniaturized extraction method for the determination of selected PACs in sample filters is presented. The main objective was the optimization and development of simple, rapid and low cost methods, minimizing the use of extracting solvent volume. The work also includes a study on the intermediate precision. (Author)
Pauzi Ismail, Mohamad
Construction of modern structures requires good-quality concrete with adequate strength and durability. Several accidents in civil construction have occurred and been reported in the media. Such accidents were due to poor workmanship and a lack of systematic monitoring during construction. In addition, water leakage and cracking in residential houses are commonly reported. Based on these facts, monitoring the quality of concrete in structures is becoming an increasingly important subject. This paper describes the major non-destructive testing (NDT) methods for evaluating the structural integrity of concrete buildings. Some interesting findings from actual NDT inspections on site are presented. The NDT methods used are explained, compared and discussed, and suitable methods are suggested as the minimum set of NDT methods covering the parameters required in an inspection.
Pinson, Paul A.
A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention.
Radhakrishna, K.; Bowles, K.; Zettek-Sumner, A.
Background: Telehealth data overload through high alert generation is a significant barrier to sustained adoption of telehealth for managing heart failure (HF) patients. Objective: To explore the factors contributing to frequent telehealth alerts, including false alerts, for Medicare HF patients admitted to a home health agency. Materials and Methods: A mixed-methods design was employed, combining quantitative correlation analysis of patient characteristics against the number of telehealth alerts with qualitative analysis of telehealth and visiting nurses' notes on follow-up actions to patients' telehealth alerts. All quantitative and qualitative data were collected through retrospective review of electronic records of the home health agency. Results: Subjects in the study had a mean age of 83 (SD = 7.6); 56% were female. Patient co-morbidities were significantly associated with the number of telehealth alerts. Considering patient characteristics, along with establishing patient-centered telehealth outcome goals, may allow meaningful generation of telehealth alerts. Reducing avoidable telehealth alerts could vastly improve the efficiency and sustainability of telehealth programs for HF management. PMID:24454576
With an increasing number of manufacturing services, the means by which to select and compose these services have become a challenging problem. It can be regarded as a multiobjective optimization problem involving a variety of conflicting quality of service (QoS) attributes. In this study, a multiobjective optimization model of manufacturing service composition is presented that is based on QoS and an environmental index. Next, the skyline operator is applied to reduce the solution space. Then a new method, an improved Flower Pollination Algorithm (FPA), is proposed for solving the problem of manufacturing service selection and composition. The improved FPA enhances the performance of the basic FPA by combining it with the crossover and mutation operators of the Differential Evolution (DE) algorithm. Finally, a case study is conducted to compare the proposed method with other evolutionary algorithms, including the Genetic Algorithm, DE, basic FPA, and extended FPA. The experimental results reveal that the proposed method performs best at solving the problem of manufacturing service selection and composition.
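The hybridization described above can be sketched in a few lines: basic FPA moves augmented with DE/rand/1 mutation and binomial crossover. This is a toy illustration on a sphere function, not the paper's QoS composition model; all parameter values are illustrative, and a Gaussian step stands in for the Lévy flight of the full FPA.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Toy objective standing in for the QoS/environmental fitness.
    return float(np.sum(x ** 2))

n, dim, p_switch, F, CR = 20, 5, 0.8, 0.5, 0.9
pop = rng.uniform(-5, 5, (n, dim))
fit = np.array([sphere(x) for x in pop])
best_i = int(fit.argmin())
best, best_f = pop[best_i].copy(), float(fit[best_i])
start_f = best_f

for _ in range(100):
    for i in range(n):
        if rng.random() < p_switch:
            # "Global pollination": random step toward the current best flower.
            cand = pop[i] + rng.normal(size=dim) * (best - pop[i])
        else:
            # The hybrid part: DE/rand/1 mutation plus binomial crossover.
            a, b, c = pop[rng.choice(n, 3, replace=False)]
            mask = rng.random(dim) < CR
            cand = np.where(mask, a + F * (b - c), pop[i])
        f = sphere(cand)
        if f < fit[i]:  # greedy replacement, as in DE
            pop[i], fit[i] = cand, f
            if f < best_f:
                best, best_f = cand.copy(), f
```

The greedy replacement guarantees the best fitness is monotonically non-increasing, which the assertions below check.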
Valls-Cantenys, Carme; Scheurer, Marco; Iglesias, Mònica; Sacher, Frank; Brauch, Heinz-Jürgen; Salvadó, Victoria
A sensitive, multi-residue method using solid-phase extraction followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed to determine a representative group of 35 analytes, including corrosion inhibitors, pesticides and pharmaceuticals such as analgesic and anti-inflammatory drugs, five iodinated contrast media, β-blockers and some of their metabolites and transformation products in water samples. Few other methods are capable of determining such a broad range of contrast media together with other analytes. We studied the parameters affecting the extraction of the target analytes, including sorbent selection and extraction conditions, their chromatographic separation (mobile phase composition and column) and detection conditions using two ionisation sources: electrospray ionisation (ESI) and atmospheric pressure chemical ionisation (APCI). In order to correct matrix effects, a total of 20 surrogate/internal standards were used. ESI was found to have better sensitivity than APCI. Recoveries ranging from 79 to 134% for tap water and 66 to 144% for surface water were obtained. Intra-day precision, calculated as relative standard deviation, was below 34% for tap water and below 21% for surface water, groundwater and effluent wastewater. Method quantification limits (MQL) were in the low ng L⁻¹ range, except for the contrast agents iomeprol, amidotrizoic acid and iohexol (22, 25.5 and 17.9 ng L⁻¹, respectively). Finally, the method was applied to the analysis of 56 real water samples as part of the validation procedure. All of the compounds were detected in at least some of the water samples analysed. Graphical Abstract Multi-residue method for the determination of micropollutants including pharmaceuticals, iodinated contrast media and pesticides in waters by LC-MS/MS.
Sloth Møller, Ditte; Knap, Marianne Marquard; Nyeng, Tine Bisballe; Khalil, Azza Ahmed; Holt, Marianne Ingerslev; Kandi, Maria; Hoffmann, Lone
Minimizing the planning target volume (PTV) while ensuring sufficient target coverage during the entire respiratory cycle is essential for free-breathing radiotherapy of lung cancer. Different methods are used to incorporate the respiratory motion into the PTV. Fifteen patients were analyzed. Respiration can be included in the target delineation process, creating a respiratory GTV denoted iGTV. Alternatively, the respiratory amplitude (A) can be measured on the 4D-CT and incorporated in the margin expansion. The GTV expanded by A yielded GTV + resp, which was compared to iGTV in terms of overlap. Three methods for PTV generation were compared: PTV_del (delineated iGTV expanded to CTV plus PTV margin), PTV_σ (GTV expanded to CTV, with A included as a random uncertainty in the CTV-to-PTV margin) and PTV_Σ (GTV expanded to CTV, succeeded by linear CTV expansion by A to CTV + resp, which was finally expanded to PTV_Σ). Deformation of tumor and lymph nodes during respiration resulted in volume changes between the respiratory phases. The overlap between iGTV and GTV + resp showed that on average 7% of iGTV lay outside GTV + resp, implying that GTV + resp did not capture the tumor during the full deformable respiration cycle. A comparison of the PTV volumes showed that PTV_σ was smallest and PTV_Σ largest for all patients. PTV_σ was on average 14% (31 cm³) smaller than PTV_del, while PTV_del was 7% (20 cm³) smaller than PTV_Σ. PTV_σ yields the smallest volumes but does not ensure coverage of the tumor during the full respiratory motion due to tumor deformation. Incorporating the respiratory motion in the delineation (PTV_del) takes the entire respiratory cycle, including deformation, into account, at the cost, however, of larger treatment volumes. PTV_Σ should not be used, since it combines the disadvantages of both PTV_del and PTV_σ.
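The size difference between the two margin recipes follows directly from how the amplitude A enters the expansion. The sketch below assumes the common van Herk recipe (2.5Σ + 0.7σ) for the margin, which the paper itself may not use; folding A into the random component in quadrature (as in PTV_σ) always gives a smaller margin than adding A linearly (as in PTV_Σ). All numbers are hypothetical.

```python
import math

def margin_sigma(Sigma, sigma, A):
    # Amplitude treated as a random error: added in quadrature to sigma.
    return 2.5 * Sigma + 0.7 * math.sqrt(sigma ** 2 + (A / 2) ** 2)

def margin_Sigma(Sigma, sigma, A):
    # Amplitude treated as a linear (systematic-style) expansion.
    return 2.5 * Sigma + 0.7 * sigma + A

# Hypothetical systematic error, random error and peak-to-peak amplitude (mm).
Sigma, sigma, A = 3.0, 3.0, 8.0
m_quad = margin_sigma(Sigma, sigma, A)
m_lin = margin_Sigma(Sigma, sigma, A)
```

With these numbers the quadrature margin is 11.0 mm versus 17.6 mm for the linear one, mirroring the paper's finding that PTV_σ is smallest and PTV_Σ largest.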
Background Success of metabolomics as the phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability such as systematic error is therefore one of the foremost priorities in data preprocessing. However, chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim to remove unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find optimal normalization factor for each individual molecular species detected by metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention time region specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select best combinations of standard compounds for normalization. Conclusion Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted in repeatability conditions. The method can also be used in analytical development of metabolomics methods by helping to select best combinations of standard compounds for a particular biological matrix and analytical platform.
Growth rate is the main breeding goal of fish breeders, but individual selection has often shown poor responses in fish species. The PROSPER method was developed to overcome possible factors that may contribute to this low success, using (1) a variable base population and a high number of breeders (Ne > 100), (2) selection within groups with low non-genetic effects and (3) repeated growth challenges. Using calculations, we show that individual selection within groups, with appropriate management of maternal effects, can be superior to mass selection as soon as the maternal effect ratio exceeds 0.15, when heritability is 0.25. Practically, brown trout were selected on length at the age of one year with the PROSPER method. The genetic gain was evaluated against an unselected control line. After four generations, the mean response per generation in length at one year was 6.2% of the control mean, while the mean correlated response in weight was 21.5% of the control mean per generation. At the 4th generation, selected fish also appeared to be leaner than control fish when compared at the same size, and the response in weight was maximal (≈130% of the control mean) between 386 and 470 days post fertilisation. This high response is promising; however, the key points of the method have to be investigated in more detail.
The book showcases a selection of peer-reviewed papers, the preliminary versions of which were presented at a conference held 11-13 June 2011 in Bologna and organized jointly by the Italian Statistical Society (SIS), the National Institute of Statistics (ISTAT) and the Bank of Italy. The theme of the conference was "Statistics in the 150 years of the Unification of Italy." The celebration of the anniversary of Italian unification provided the opportunity to examine and discuss the methodological aspects and applications from a historical perspective and both from a national and international point of view. The critical discussion on the issues of the past has made it possible to focus on recent advances, considering the studies of socio-economic and demographic changes in European countries.
Sinaga, T. S.; Siregar, K.
Generally, companies face the problem of identifying vendors that can provide excellent service in raw material availability and on-time delivery. The performance of a company's suppliers has to be monitored to ensure their ability to fulfill the company's needs. This research explains how to assess suppliers in order to improve manufacturing performance. The criteria considered in evaluating suppliers are Dickson's criteria. There are four main criteria, further split into seven sub-criteria, namely compliance with accuracy, consistency, on-time delivery, right order quantity, flexibility and negotiation, timeliness of order confirmation, and responsiveness. This research uses the PROMETHEE methodology to assess supplier performance and to obtain the best supplier, as shown by the degree of preference in the pairwise comparison between suppliers.
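The PROMETHEE ranking referred to above can be sketched compactly. The fragment below is a minimal PROMETHEE II computation with the "usual" (strict) preference function, P(d) = 1 if d > 0 else 0; the supplier scores, the equal weights, and the assumption that higher is better on every sub-criterion are all hypothetical stand-ins for the paper's actual data.

```python
import numpy as np

# Rows: three hypothetical suppliers; columns: the seven sub-criteria.
scores = np.array([
    [7, 8, 9, 6, 7, 8, 7],
    [8, 7, 6, 8, 6, 7, 8],
    [6, 6, 8, 7, 8, 6, 6],
], dtype=float)
weights = np.full(7, 1 / 7)  # equal weights, for illustration only

n = len(scores)
pi = np.zeros((n, n))  # aggregated preference of supplier i over supplier j
for i in range(n):
    for j in range(n):
        if i != j:
            d = scores[i] - scores[j]
            pi[i, j] = np.sum(weights * (d > 0))  # usual preference function

# PROMETHEE II net outranking flow: leaving flow minus entering flow.
phi = pi.sum(axis=1) / (n - 1) - pi.sum(axis=0) / (n - 1)
best = int(np.argmax(phi))
```

The net flows always sum to zero, and the supplier with the highest net flow is the selected one.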
This paper deals with the application of acoustic emission (AE), one of the non-destructive methods currently in extensive use. The method is used for detecting internal defects in materials. AE has high potential for further research and development to extend its application to the field of process engineering; for that matter, acoustic emission monitoring is most elaborated in laboratory conditions with regard to external stimuli. The aim of the project is to apply acoustic emission to recording the activity of bees in different seasons. The mission is to offer a new perspective on the behavior of colonies by means of acoustic emission, which captures sound propagation in the material. Vibration is an integral part of communication in the colony. Sensing colonies with the support of this method serves to understand the biological behavior of colonies in response to stimuli such as brood rearing, colony development, etc. Simulated conditions supported by an acoustic emission monitoring system illustrate colony activity. The collected information will be used to present a comprehensive view of the life cycle and behavior of honey bees (Apis mellifera). Information about the activities of bees gives a comprehensive perspective on the use of acoustic emission in the field of biological research.
... and ease of use to screen six extremely halophilic biosurfactant-producing bacteria isolated from saline soil of Chott El Hodna-M'sila (Algeria), which is considered a thalassohaline environment. Results from the screening methods revealed that the CH2 and CH5 strains are potential candidates for biosurfactant production.
For prediction of future events, a number of methods are usable in managerial practice. Which of them should be used in a particular situation depends not only on the amount and quality of input information, but also on subjective managerial judgement. The paper performs a practical application and subsequent comparison of the results of two selected methods: a statistical method and a deductive method. Both methods were used for predicting wine exports from and imports to the Czech Republic. The prediction was made in 2003 and related to the economic years 2003/2004, 2004/2005, 2005/2006, and 2006/2007, for which it was compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterized, including what the authors considered the most important influence: the integration of the Czech Republic into the EU on 1st May, 2004. By contrast, the statistical method of time-series analysis did not take the integration into account, which follows from its principle: statistics calculates only from past data and cannot incorporate the influence of irregular future conditions such as the EU integration. For this reason the prediction based on the deductive method was more optimistic and more precise in terms of its difference from the real development in the given field.
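The statistical approach discussed above can be illustrated with the simplest case: a least-squares linear trend fitted to past figures and extrapolated forward. By construction it ignores one-off future events such as the 2004 EU accession, which is exactly the limitation the paper notes. The export figures below are made up for illustration.

```python
import numpy as np

years = np.array([1998, 1999, 2000, 2001, 2002])
exports = np.array([10.0, 11.0, 13.0, 14.0, 16.0])  # hypothetical units

# Degree-1 polynomial fit: returns (slope, intercept).
slope, intercept = np.polyfit(years, exports, 1)

# Extrapolate two years past the last observation; no structural break
# (e.g. EU accession) can enter this forecast.
forecast_2004 = slope * 2004 + intercept
```

With these numbers the fitted trend is 1.5 units/year, giving a 2004 forecast of 18.8 regardless of any regime change in that year.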
Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. The reference satellite selection methods based on elevation and on positional dilution of precision (PDOP) value were compared. Results show that neither of these methods always selects the optimal reference satellite. We therefore introduce the condition number of the design matrix into the reference satellite selection method to improve the structure of the normal equation, because the condition number indicates ill-conditioning of the normal equation. The experimental results show that the new method can improve positioning accuracy and reliability in precise relative positioning.
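The criterion itself is simple to compute: for each candidate reference satellite, form the corresponding design matrix and pick the candidate whose matrix has the smallest condition number (ratio of largest to smallest singular value), since a large condition number signals an ill-conditioned normal equation. The matrices below are random stand-ins for real double-difference geometry, and the satellite names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# One candidate design matrix per candidate reference satellite
# (8 observations x 4 unknowns here, purely illustrative).
candidates = {f"G{k:02d}": rng.normal(size=(8, 4)) for k in range(1, 6)}

# 2-norm condition number of each design matrix.
cond = {sat: float(np.linalg.cond(A)) for sat, A in candidates.items()}

# Choose the reference satellite minimizing the condition number.
reference = min(cond, key=cond.get)
```

For a full-rank matrix the condition number is always at least 1, and a value near 1 means the normal equation N = AᵀA is well conditioned.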
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), using a fire alarm system (SSP) as an example. Reliability of the SSP, in the descriptive sense, is its capacity to perform a preset function (e.g. fire protection of an airport, a port, a logistics base, etc.) at a certain time and under certain conditions (e.g. environmental), despite possible failure of a specific subset of the system's elements. A review of the available literature on ESS/SSP reveals no studies on methods of increasing reliability (several works address similar topics, but with respect to burglary and robbery, i.e. intrusion). The analysis is based on the set of all paths in the system determining the suitability of the SSP for the mentioned fire-event scenario (the devices being critical for security).
This patent describes a method for forming a contact plug on a surface of a semiconductor body. It comprises: forming a dielectric layer over the surface of the semiconductor body, the dielectric layer having an aperture therethrough with sidewalls comprising silicon nitride; depositing a metal into the aperture in such a manner that the metal deposits upon the silicon nitride of the sidewalls of the aperture at a substantially greater rate than upon the surface of the dielectric layer.
Selection of the most suitable nonconventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, operational competitiveness ratings analysis (OCRA), for solving NCMP selection problems. The applicability, suitability and computational procedure of the OCRA method are demonstrated by solving three case studies dealing with the selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the OCRA method correlate well with those derived by past researchers, which validates its usefulness for solving complex NCMP selection problems.
Monitoring water, sanitation, and hygiene (WaSH) is important to track progress, improve accountability, and demonstrate the impact of efforts to improve conditions and services, especially in low- and middle-income countries. Indicator selection methods enable robust monitoring of WaSH projects and conditions. However, selection methods are not always used, and there are no commonly used methods for selecting WaSH indicators. To address this gap, we conducted a systematic review of indicator selection methods used in WaSH-related fields. We present a summary of indicator selection methods for environment, international development, and water. We identified six methodological stages for selecting indicators for WaSH: define the purpose and scope; select a conceptual framework; search for candidate indicators; determine selection criteria; score indicators against criteria; and select a final suite of indicators. This summary of indicator selection methods provides a foundation for the critical assessment of existing methods. It can be used to inform future efforts to construct indicator sets in WaSH and related fields.
Zhu, Xiaofeng; Suk, Heung-Il; Wang, Li; Lee, Seong-Whan; Shen, Dinggang
In this paper, we focus on joint regression and classification for Alzheimer's disease diagnosis and propose a new feature selection method by embedding the relational information inherent in the observations into a sparse multi-task learning framework. Specifically, the relational information includes three kinds of relationships (such as feature-feature relation, response-response relation, and sample-sample relation), for preserving three kinds of the similarity, such as for the features, the response variables, and the samples, respectively. To conduct feature selection, we first formulate the objective function by imposing these three relational characteristics along with an ℓ2,1-norm regularization term, and further propose a computationally efficient algorithm to optimize the proposed objective function. With the dimension-reduced data, we train two support vector regression models to predict the clinical scores of ADAS-Cog and MMSE, respectively, and also a support vector classification model to determine the clinical label. We conducted extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to validate the effectiveness of the proposed method. Our experimental results showed the efficacy of the proposed method in enhancing the performances of both clinical scores prediction and disease status identification, compared to the state-of-the-art methods. Copyright © 2015 Elsevier B.V. All rights reserved.
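The ℓ2,1-norm regularizer mentioned above is the sum of the ℓ2 norms of the rows of the weight matrix; penalizing it drives entire rows to zero, which deselects the corresponding features jointly across all tasks. A minimal sketch (the example matrix is hypothetical):

```python
import numpy as np

def l21_norm(W: np.ndarray) -> float:
    """Sum of the Euclidean norms of the rows of W (the l2,1 norm)."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

W = np.array([
    [3.0, 4.0],    # row norm 5: feature kept for both tasks
    [0.0, 0.0],    # zeroed-out row: feature jointly deselected
    [5.0, 12.0],   # row norm 13
])
```

Here the norm is 5 + 0 + 13 = 18; the all-zero row contributes nothing, which is why minimizing this penalty yields row-sparse (feature-selecting) solutions.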
The control method, one of the most reliable methods of selection forest management, has been applied in the selection forests of western Serbia in a somewhat modified form (the Goč variety) for fifty years. This paper analyzes the effects of the control method, i.e. its Goč variety, in the period 1960/70-2000. It is based on the data of five successive complete inventories of the Forest Management Unit (FMU) 'Tara', whose high selection forests of spruce, fir and beech (Piceo-Abieti-Fagetum subass. typicum) on diluvial, brown and illimerised soils on limestone, and on limestone in formation with hornfels, are the best-quality and most extensive forests in Management Class (MC) 491/1. The effects were monitored through the changes in the distribution of the number of trees and volume per diameter class, separately for fir as the protagonist of the selection structure, and collectively at the level of a compartment, a typical representative of MC 491/1. The analysis also included the changes in the number of trees, volume, current volume increment, yield, and number of recruited trees per unit area (1 ha) by tree species in MC 491/1, which occupies an area of 2,648.78 ha. The study results show that in the study period the average volume in MC 491/1 increased by 18.8%, the percentage of conifers increased from 66.0% to 78.5%, and the bearer of the changes was fir. The volume of the mean fir tree increased by 35.9%, reaching 1.086 m³. The volume increment increased by 15.7%. The selection structure of conifers was satisfactory, but there were problems with beech regeneration, with its stable presence, and with its achievement of the targeted structure. The number of trees per unit area (1 ha) decreased, which in the long run could have detrimental consequences, but sustainability in general was satisfactory. The levels of regeneration and recruitment were satisfactory. The health of the trees improved; the stands were healthy and vital
Hano, Mitsuo; Hotta, Masashi
A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An algorithm for eliminating the constant term using a null space of the coefficient matrix is also described. For a three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
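As a toy stand-in for the two-level idea (smoothing on the fine level, an exact solve on the coarse level), a two-grid cycle for the 1D Poisson equation with Gauss-Seidel smoothing can be sketched as follows. The vector finite element hierarchy and the ICCG coarse solver of the paper are replaced here by simple finite differences and a direct solve.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Lexicographic Gauss-Seidel sweeps for -u'' = f on a uniform grid."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def two_grid(u, f, h):
    n = len(u) - 1
    u = gauss_seidel(u, f, h, sweeps=3)                     # pre-smoothing
    r = np.zeros_like(u)                                    # residual f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    rc = np.zeros(n // 2 + 1)                               # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    m = len(rc) - 2                                         # coarse interior points
    Ac = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
          - np.diag(np.ones(m - 1), -1)) / (2 * h) ** 2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])                # exact coarse solve
    e = np.interp(np.arange(n + 1), np.arange(0, n + 1, 2), ec)  # linear prolongation
    return gauss_seidel(u + e, f, h, sweeps=3)              # post-smoothing

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)      # exact solution is u(x) = sin(pi x)
u = np.zeros(n + 1)
for _ in range(20):
    u = two_grid(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
print(err)
```

The smoother damps high-frequency error on the fine grid, while the coarse correction removes the smooth error component that Gauss-Seidel alone reduces only slowly; the same division of labor underlies the high-order/low-order element hierarchy in the abstract.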
Young, J.R.; Kannberg, L.D.; Ramsdell, J.V.; Rickard, W.H.; Watson, D.G.
Selection of the best method for disposal of the waste heat from a large power generation center requires a comprehensive comparison of the costs and environmental effects. The objective is to identify the heat dissipation method with the minimum total economic and environmental cost. A 20-reactor Hanford Nuclear Energy Center (HNEC) will dissipate about 50,000 MWt of waste heat; a 40-reactor HNEC would release about 100,000 MWt. This is a much larger discharge of heat than has occurred from other concentrated industrial facilities, and consequently a special analysis is required to determine the permissibility of such a large heat disposal and the best methods of disposal. It is possible that some methods of disposal will not be permissible because of excessive environmental effects, or that the optimum disposal method may include a combination of several methods. A preliminary analysis of the HNEC heat disposal problem is presented to determine the best methods for disposal and any obvious limitations on the amount of heat that can be released. The analysis is based, in part, on information from an interim conceptual study, a heat sink management analysis, and a meteorological analysis.
Cesar, Paulo Francisco; Della Bona, Alvaro; Scherrer, Susanne S; Tholey, Michael; van Noort, Richard; Vichi, Alessandro; Kelly, Robert; Lohbauer, Ulrich
The objective, within the scope of the Academy of Dental Materials Guidance Project, is to provide dental materials researchers with a critical analysis of fracture toughness (FT) tests such that the assessment of the FT of dental ceramics is conducted in a reliable, repeatable and reproducible way. Fracture mechanics theory and FT methodologies were critically reviewed to introduce basic fracture principles and determine the main advantages and disadvantages of existing FT methods from the standpoint of the dental researcher. The recommended methods for FT determination of dental ceramics were the Single Edge "V" Notch Beam (SEVNB), Single Edge Precracked Beam (SEPB), Chevron Notch Beam (CNB), and Surface Crack in Flexure (SCF). SEVNB's main advantage is the ease of producing the notch via a cutting disk; SEPB allows for production of an atomically sharp crack generated by a specific precracking device; CNB is technically difficult but based on solid fracture mechanics solutions; and SCF involves fracture from a clinically sized precrack. The indentation fracture (IF) test should be avoided due to heavy criticism that has arisen in the engineering field regarding the empirical nature of the calculations used for FT determination. Dental researchers interested in FT measurement of dental ceramics should start with a broad review of fracture mechanics theory to understand the underlying principles involved in fast fracture of ceramics. The choice of FT methodology should be based on the pros and cons of each test, as described in this literature review. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Cook, Joseph S., Jr.
This invention comprises a torque drive mechanism utilizing axially translatable, mutually engageable transmission members having mating crown gears, driven and driving members with a three-element drive train being biased together by resilient means or by a fluid actuator system, the apparatus being operable to transmit a precisely controlled degree of torque to a driven member. The apparatus is applicable for use in hand tools and as a replacement for impact torque drivers, torque wrenches, motorized screw drivers, or the like, wherein the applied torque must be precisely controlled or limited. The bevel torque drive includes a drive gear which is axially displaceable and rotatable within a cylindrical driver housing, a rotatable intermediate gear, and an output gear. A key rotationally secures the displaceable gear with respect to the input shaft but permits axial movement between them. A thrust bearing is preferably connected to the lower end of the shaft for support, to reduce play and friction between the shaft and a transmission joint disc during rotation of the gear train. A coaxially mounted coiled spring is footed against the displaceable gear, biasing it toward and into engagement with the intermediate gear for driving the intermediate gear and the output gear. Torque control is achieved by the use of straight or spiral beveled gears of configurations adapted to withdraw from mutual engagement when the torque exceeds a predetermined limit. The novel, advantageous features of the invention include the configuration of the mating crown gear sets and the axially translatable, slidable drive gear. The mechanism is capable of transmitting a high degree of torque within a narrow, compact transmission housing. The compact size and narrow, elongated configuration of the housing are particularly applicable for use in hand tools and in multiple torque driver mechanisms in which it is necessary to drive multiple fasteners located in close proximity. Prior
Respiratory viruses contribute to significant morbidity and mortality in healthy and immunocompromised individuals and are considered a significant economic burden on the healthcare system. The similar clinical symptoms in the course of different viral and bacterial respiratory infections make proper diagnosis difficult. Accurate and prompt diagnostics are crucial for infection control and patient management decisions, especially regarding the use of antibacterial or antiviral therapy and hospitalization. Moreover, the identification of the causative agent eliminates inappropriate use of antibiotics and may reduce the cost of healthcare. A wide variety of diagnostic procedures is applied for the detection of viral agents responsible for respiratory tract infections. For many years, viral antigen detection and the standard isolation technique in cell culture were the main methods used in routine diagnostics. In recent years, however, nucleic acid amplification techniques have become widely used and have significantly improved the sensitivity of viral detection in clinical specimens. Molecular diagnostic assays have contributed to revealing high rates of co-infection (multiplex reactions) and allow identification of agents that are difficult to culture. This paper discusses a number of technical aspects of the most commonly used current techniques, their general principles, main benefits and diagnostic value, but also some of their limitations.
Brillowska-Dabrowska, A.; Wianecka, M.; Dabrowski, Slawomir
A DNA fingerprinting method known as ALIS-FLP (amplified ligation selected fragment-length polymorphism) has been developed for selective and specific amplification of restriction fragments from TspRI restriction endonuclease digested genomic DNA. The method is similar to AFLP, but differs...
Aziz, Romana; Aziz, R.; van Hillegersberg, Jos; Kersten, W.; Blecker, T.; Luthje, C.
Selection of suppliers and allocation of optimum volumes to suppliers is a strategic business decision. This paper presents a decision support method for supplier selection and the optimal allocation of volumes in a supplier portfolio. The requirements for the method were gathered during a case
van der Steen, LPE; Hage, JJ; Kon, M; Monstrey, SJ
In 1999, the European Association of Plastic Surgeons accepted a structured method to assess and select the abstracts that are submitted for its yearly scientific meeting. The two criteria used to evaluate whether such a selection method is accurate were reliability and validity. The authors
Heidaritabar, M.; Vereijken, A.; Muir, W.M.; Meuwissen, T.H.E.; Cheng, H.; Megens, H.J.W.C.; Groenen, M.; Bastiaansen, J.W.M.
Genomic selection (GS) is a DNA-based method of selecting for quantitative traits in animal and plant breeding, and offers a potentially superior alternative to traditional breeding methods that rely on pedigree and phenotype information. Using a 60K SNP chip with markers spaced throughout the
Outdoor performance of photovoltaic modules and systems depends on prevailing conditions at the time of measurement. Outdoor test conditions must be relevant to device performance and readily attainable. Flat-plate, nonconcentrator PV device performance is reported with respect to fixed conditions referred to as Standard Reporting Conditions (SRC) of 1 kW/m² plane-of-array total irradiance, 25 °C device temperature, and a reference spectral distribution at air mass 1.5 under certain atmospheric conditions. We report a method of analyzing historical meteorological and irradiance data to determine the range of outdoor environmental parameters and solar irradiance components that affect solar collector performance when the SRC 1 kW/m² total irradiance value occurs outdoors. We used data from the 30-year U.S. National Solar Radiation Data Base (NSRDB), restricting irradiance conditions to within ±25 W/m² of 1 kW/m² on a solar-tracking flat-plate collector. The distributions of environmental parameter values under these conditions are non-Gaussian and site dependent. Therefore the median, as opposed to the mean, of the observed distributions is chosen to represent appropriate outdoor reporting conditions. We found the average medians for the direct beam component (834 W/m²), ambient temperature (24.4 °C), total column water vapor (1.4 cm), and air mass (1.43) are near commonly used SRC values. Average median wind speed (4.4 m/s) and broadband aerosol optical depth (0.08) were significantly different from commonly used values
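The screening step described above, restricting records to near-1 kW/m² irradiance and then taking medians of the filtered, non-Gaussian distributions, can be sketched with synthetic data; the field names and distributions below are illustrative, not NSRDB values.

```python
import numpy as np

# Sketch: filter records where plane-of-array irradiance is within
# +/-25 W/m^2 of 1000 W/m^2, then report the MEDIAN of each environmental
# parameter (the filtered distributions are non-Gaussian, so the median is
# preferred over the mean). All data here are synthetic placeholders.

rng = np.random.default_rng(1)
records = {
    "total_irradiance": rng.normal(850, 150, 10_000),   # W/m^2
    "direct_beam":      rng.normal(780, 120, 10_000),   # W/m^2
    "ambient_temp":     rng.normal(22.0, 8.0, 10_000),  # deg C
    "wind_speed":       rng.gamma(2.0, 2.2, 10_000),    # m/s (skewed)
}

mask = np.abs(records["total_irradiance"] - 1000.0) <= 25.0
medians = {k: float(np.median(v[mask]))
           for k, v in records.items() if k != "total_irradiance"}
print(medians, int(mask.sum()))
```

With real NSRDB records, the same conditional-median computation would be repeated per site, since the paper notes the filtered distributions are site dependent.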
Aagaard, Jan E.; Vacquier, Victor D.; MacCoss, Michael J.; Swanson, Willie J.
Identifying fertilization molecules is key to our understanding of reproductive biology, yet only a few examples of interacting sperm and egg proteins are known. One of the best characterized comes from the invertebrate archeogastropod abalone (Haliotis spp.), where sperm lysin mediates passage through the protective egg vitelline envelope (VE) by binding to the VE protein vitelline envelope receptor for lysin (VERL). Rapid adaptive divergence of abalone lysin and VERL are an example of positive selection on interacting fertilization proteins contributing to reproductive isolation. Previously, we characterized a subset of the abalone VE proteins that share a structural feature, the zona pellucida (ZP) domain, which is common to VERL and the egg envelopes of vertebrates. Here, we use additional expressed sequence tag sequencing and shotgun proteomics to characterize this family of proteins in the abalone egg VE. We expand 3-fold the number of known ZP domain proteins present within the VE (now 30 in total) and identify a paralog of VERL (vitelline envelope zona pellucida domain protein [VEZP] 14) that contains a putative lysin-binding motif. We find that, like VERL, the divergence of VEZP14 among abalone species is driven by positive selection on the lysin-binding motif alone and that these paralogous egg VE proteins bind a similar set of sperm proteins including a rapidly evolving 18-kDa paralog of lysin, which may mediate sperm–egg fusion. This work identifies an egg coat paralog of VERL under positive selection and the candidate sperm proteins with which it may interact during abalone fertilization. PMID:19767347
Swiderska, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Slodkowska, Janina
The paper presents a combined method for automatic hot-spot area selection based on a penalty factor in whole-slide images, to support the pathomorphological diagnostic procedure. The studied slides represent meningioma and oligodendroglioma tumors on the basis of the Ki-67/MIB-1 immunohistochemical reaction, which allows determining the tumor proliferation index and gives an indication for medical treatment and prognosis. The combined method, based on mathematical morphology, thresholding, texture analysis and classification, is proposed and verified. The presented algorithm includes building a specimen map, eliminating hemorrhages from it, and two methods for detecting hot-spot fields with respect to an introduced penalty factor. Furthermore, we propose a localization concordance measure to evaluate the localization of the hot spots selected by the algorithms with respect to the expert's results. The influence of the penalty factor is presented and discussed; the best results were obtained for a penalty factor value of 0.2. These results confirm the effectiveness of the applied approach.
Background: Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), however, involves difficulty in setting the overall p-value due to a complicated correlation structure; namely, the multiple testing problem causes unacceptable false negative results. Moreover, the number of SNP-SNP pairs far exceeds the sample size (the so-called 'large p, small n' problem), which precludes simultaneous analysis using multiple regression. A method that overcomes these issues is thus needed. Results: We adopt an up-to-date method for ultrahigh-dimensional variable selection termed sure independence screening (SIS) for appropriate handling of the numerous SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a subsequent variable selection procedure in the SIS method, suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case Control Consortium) data. Conclusions: Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
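The core SIS idea, ranking interaction terms by marginal association and retaining only the top few for a subsequent joint model, can be sketched as follows; the dummy-coding strategies and GPU-based exhaustive search of EPISIS are omitted, and the data are simulated.

```python
import numpy as np

def sis_screen(X, y, d):
    """Return the indices of the d predictors with the largest absolute
    marginal correlation with y (the SIS ranking criterion)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(corr))[:d]

rng = np.random.default_rng(2)
n, p = 200, 50
snps = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotype codes
# Build all pairwise interaction terms (p*(p-1)/2 columns).
pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]
inter = np.column_stack([snps[:, i] * snps[:, j] for i, j in pairs])
# Simulated phenotype driven by the interaction of SNPs 3 and 7 plus noise.
y = 2.0 * snps[:, 3] * snps[:, 7] + rng.standard_normal(n)
top = sis_screen(inter, y, d=10)
print(pairs[top[0]])
```

Screening reduces the 'large p, small n' problem to a dimension where a joint model (e.g. the logistic regression used in the paper) becomes feasible; the theoretical appeal of SIS is that, under regularity conditions, the true predictors survive the screen with high probability.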
The volume includes the presentations given at the International Meeting 'Selected Topics on Nuclear Methods for Non-nuclear Applications'. The meeting was organized by the Project CECOA. The Project 'CEnter for COoperative Activities' (CECOA) of the Institute for Nuclear Research and Nuclear Energy (INRNE) of the Bulgarian Academy of Sciences is part of the Program 'Creating of Infrastructure' of the Bulgarian Ministry of Science and Education. The CECOA project unifies the groups of INRNE doing research in the field of nuclear methods. Four laboratories of INRNE are members of the CECOA project: Moessbauer Spectroscopy and Low Radioactivity Measurements, High-Resolution Gamma-Spectroscopy, Neutron Methods in Condensed Matter, and Neutron Optics and Structure Analysis. Taking into account the leading role of education in nuclear physics, the Project includes a program devoted to training in nuclear physics. The presented volume contains 23 contributed papers, separated into 6 sections. The section 'Nano technology' includes 5 papers; the activity in this field within the Project reveals the collaboration with other institutes of the Bulgarian Academy of Sciences as well as extensive international contacts. The section 'Radioecology and Radioactive Waste' is twofold. Part of the contributions in this section manifests the connection of CECOA with small enterprises; the contacts take the form of common projects concerning the investigation, remediation and release of radioactively contaminated terrain, soils, water, buildings and materials around the former uranium processing industry. Another part of the section is devoted to the application of nuclear methods to the treatment of radioactive waste produced by nuclear power stations. The section 'Neutron Physics' covers the activity within the Project connected with the study of new materials using polarized neutrons and neutron diffraction methods. The section 'Nuclear Physics' is an introduction to some
Pires, Sara Monteiro
on the public health question being addressed, on the data requirements, on advantages and limitations of the method, and on the data availability of the country or region in question. Previous articles have described available methods for source attribution, but have focused only on foodborne microbiological...
Ratri Enggar Pawening
Datasets with heterogeneous features can yield inappropriate feature selection results, because it is difficult to evaluate heterogeneous features concurrently. Feature transformation (FT) is one way to handle subset selection over heterogeneous features; however, transforming non-numerical into numerical features may introduce redundancy with the original numerical features. In this paper, we propose a method to select a feature subset based on mutual information (MI) for classifying heterogeneous features. We use an unsupervised feature transformation (UFT) method and the joint mutual information maximisation (JMIM) method. The UFT method transforms non-numerical features into numerical features; the JMIM method selects the feature subset with consideration of the class label. The transformed and the original features are combined, the feature subset is determined using JMIM, and the features are classified using the support vector machine (SVM) algorithm. The classification accuracy is measured for each number of selected features and compared between the UFT-JMIM method and the Dummy-JMIM method. The average classification accuracy across all experiments in this study is about 84.47% for UFT-JMIM and about 84.24% for Dummy-JMIM. This result shows that the UFT-JMIM method can minimize information loss between transformed and original features and select a feature subset that avoids redundant and irrelevant features.
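The overall flow, transforming non-numerical features to numerical codes and then ranking by mutual information with the class label, can be sketched on toy data. Note this sketch uses plain max-relevance MI ranking, a simplification of the JMIM joint-MI criterion, and a trivial integer-coding stand-in for UFT.

```python
import numpy as np
from collections import Counter

def mutual_info(x, y):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum((c / n) * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def transform(values):
    """Stand-in for the UFT step: map categories to integer codes."""
    codes = {v: i for i, v in enumerate(sorted(set(values)))}
    return [codes[v] for v in values]

# Toy heterogeneous dataset: one categorical, one numerical, one noise column.
color = ["red", "red", "blue", "blue", "red", "blue", "red", "blue"]
size  = [1, 1, 0, 0, 1, 0, 1, 0]
noise = [0, 1, 0, 1, 1, 0, 1, 0]
label = [1, 1, 0, 0, 1, 0, 1, 0]

features = {"color": transform(color), "size": size, "noise": noise}
ranked = sorted(features, key=lambda f: mutual_info(features[f], label),
                reverse=True)
print(ranked)
```

The informative categorical feature (after transformation) and the informative numerical feature rank above the noise column, which is the behavior the combined UFT-JMIM pipeline aims for on real mixed data.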
To select cytogenetically normal embryos, preimplantation genetic diagnosis (PGD) aneuploidy screening (AS) is used in numerous centers around the world. Chromosomal abnormalities lead to developmental problems, implantation failure, and early abortion of embryos. The usefulness of PGD in identifying single-gene diseases, human leukocyte antigen typing, X-linked diseases, and specific genetic diseases is well known. In this review, preimplantation embryo genetics, PGD research studies, and the European Society of Human Reproduction and Embryology PGD Consortium studies and reports are examined. In addition, criteria for embryo selection, technical aspects of PGD-AS, and potential noninvasive embryo selection methods are described. Indications for PGD and possible causes of discordant PGD results between centers are discussed. The limitations of fluorescence in situ hybridization and the advantages of array comparative genomic hybridization are included in this review. Although PGD-AS for patients of advanced maternal age has been shown to improve in vitro fertilization outcomes in some studies, to our knowledge there is not sufficient evidence to use advanced maternal age as the sole indication for PGD-AS. PGD-AS might be harmful and may not increase the success rates of in vitro fertilization. At the same time, PGD is not recommended for recurrent implantation failure or unexplained recurrent pregnancy loss.
Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik
Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.
Chen, Ye; Khashab, Niveen M.; Tao, Jing
Composition comprising at least one graphene material and at least one metal. The metal can be in the form of nanoparticles as well as microflakes, including single crystal microflakes. The metal can be intercalated in the graphene sheets.
王宏起; 唐宇; 迟运领
A successful and effective strategic alliance involves many factors, of which selecting a proper partner is the most important for achieving the success of the alliance. In view of the characteristics of strategic alliances in high- and new-technology enterprises, and based on an analysis of the criteria for partner selection and the factors behind alliance success, this paper investigates partner selection and the alliance evaluation process from the perspective of different strategic levels using a fuzzy comprehensive evaluation method, thus providing a method for selecting alliance partners for high- and new-technology enterprises in China.
Recently, various types of sensors have been developed, and IoT systems using these sensors have become a hot topic in the embedded systems domain. However, sensor selection for embedded systems has not been well discussed to date. This paper focuses on embedded systems' features and architecture and proposes a sensor selection method composed of seven steps. In addition, we applied the proposed method to a simple example, the selection of a sensor for a computer-scored answer sheet reader unit. From this case study, the idea of using FTA in sensor selection is also discussed.
Garde, A H; Hansen, Åse Marie; Kristiansen, J
The aims of this study were to elucidate to what extent storage and repeated freezing and thawing influenced the concentration of creatinine in urine samples and to evaluate the method for determination of creatinine in urine. The creatinine method was based on the well-known Jaffe's reaction...... and measured on a COBAS Mira autoanalyser from Roche. The main findings were that samples for analysis of creatinine should be kept at a temperature of -20 degrees C or lower and frozen and thawed only once. The limit of detection, determined as 3 x SD of 20 determinations of a sample at a low concentration (6...
He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li
Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. Using a probabilistic model of genetic drift and selection, we showed that log odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximately follow a normal distribution, and their variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimates. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences, and it therefore supplies a solution for hypothesis testing of selection differences. The method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in the Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
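A sketch of the estimator described in the abstract: the log odds ratio of allele frequencies between two populations, with a Wald-style interval from the count-based variance. The counts below are invented for illustration, not the Han/Tibetan data.

```python
import numpy as np

def selection_difference(ref1, alt1, ref2, alt2, z=1.96):
    """Log odds ratio of allele frequencies between two populations,
    which (scaled by time) estimates the selection-coefficient difference,
    with a Wald-style confidence interval from the count-based variance."""
    log_or = np.log((alt1 / ref1) / (alt2 / ref2))
    var = 1.0 / ref1 + 1.0 / alt1 + 1.0 / ref2 + 1.0 / alt2
    half = z * np.sqrt(var)
    return log_or, (log_or - half, log_or + half)

# A variant strongly differentiated between the two samples:
est, (lo, hi) = selection_difference(ref1=120, alt1=80, ref2=180, alt2=20)
print(round(est, 3), round(lo, 3), round(hi, 3))
```

An interval that excludes zero, as here, is the analogue of the paper's test for statistically different selection between the two populations; in practice the variance would be calibrated against genome-wide variants rather than taken from a single locus.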
Sarma, M.S.R.; Wagadarikar, V.K.
Personnel selected to work in a nuclear electric generating station rarely have the necessary knowledge and experience in all the related fields. A station can be operated and maintained, and the radiation doses absorbed by station personnel kept to a minimum, only if the operating personnel are familiar with, and can be deployed in, all phases of station operation and the maintainers have more than one skill or trade. More technical knowledge and more diversified skills, in addition to those required in other industries, are needed because of the nature of the nuclear reactor and the associated radiation environment and high automation. A training programme has been developed at the Nuclear Training Centre (NTC) near the Rajasthan Atomic Power Station (RAPS), Kota, India, to cater to the needs of the operation and maintenance personnel of nuclear power stations, including the Madras Atomic Power Station. This programme has been in operation for the last five years. The paper describes the method of recruitment/selection of various categories of personnel and the method of training them to meet the job requirements. (author)
Guillemin, Ernst A
An eminent electrical engineer and authority on linear system theory presents this advanced treatise, which approaches the subject from the viewpoint of classical dynamics and covers Fourier methods. This volume will assist upper-level undergraduates and graduate students in moving from introductory courses toward an understanding of advanced network synthesis. 1963 edition.
Composition comprising at least one graphene material and at least one metal. The metal can be in the form of nanoparticles as well as microflakes, including single crystal microflakes. The metal can be intercalated in the graphene sheets. The composition has high conductivity and flexibility. The composition can be made by a one-pot synthesis in which a graphene material precursor is converted to the graphene material, and the metal precursor is converted to the metal. A reducing solvent or dispersant such as NMP can be used. Devices made from the composition include a pressure sensor which has high sensitivity. Two two- dimension materials can be combined to form a hybrid material.
Sloth Møller, Ditte; Knap, Marianne Marquard; Nyeng, Tine Bisballe
PTVσ yields the smallest volumes but does not ensure coverage of the tumor during the full respiratory motion due to tumor deformation. Incorporating the respiratory motion in the delineation (PTVdel) takes into account the entire respiratory cycle, including deformation, but at the cost, however, of larger...
Becoming a member of the ASEAN Economic Community (AEC), Thailand expects a growth of cross-border trade with neighboring countries, especially in agricultural product shipments. To facilitate this, a number of strategies have been set, such as the utilization of single checkpoints, the development of Asian Highway (AH) routes, and the initiation of truck lanes. However, the majority of agricultural products traded through the borders are transported over rural roads, from the growing area to the factory, before continuing to the borders on different highways. It is, therefore, necessary for the Department of Rural Roads (DRR) to plan rural road improvements to accommodate the growth of cross-border trade in the near future. This research thus aims to select potential rural roads to support cross-border shipment utilizing the analytic hierarchy process (AHP) method. Seven key factors affecting rural road selection, with references from transport and other related literature, are extracted. They include: 1) cross-border trade value; 2) distance from the border to the rural road; 3) agricultural and processed agricultural goods transported across the border; 4) compatibility with national strategies; 5) area characteristics around the rural road; 6) truck volume; and 7) the number of rural roads within a radius of 50 kilometers of the border. Interviews were conducted with experts based on the seven key factors to collect data for the AHP analysis. The results identify the weight of each factor with an acceptable consistency ratio, and show that cross-border trade value is the most important factor, as it achieves the highest weight. The distance from the border to the rural road and the compatibility with national strategies are also found to be crucial when making rural road selection decisions. The Department of Rural Roads could use the results to select suitable roads and plan road improvements to support cross-border shipment when the AEC is fully implemented.
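The AHP weighting step can be illustrated with a small pairwise comparison matrix over three of the seven factors; the judgments below are invented, not the experts' actual responses.

```python
import numpy as np

# Illustrative AHP weight calculation using the geometric-mean
# approximation of the principal eigenvector, plus the consistency ratio.

factors = ["trade_value", "distance_to_border", "strategy_fit"]
# Saaty-scale pairwise comparison matrix A, where A[i][j] is the judged
# importance of factor i relative to factor j (values are hypothetical).
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1.0],
    [1/2, 1.0, 1.0],
])
gm = A.prod(axis=1) ** (1.0 / len(A))          # row geometric means
weights = gm / gm.sum()

# Consistency ratio: lambda_max from A @ w, random index RI = 0.58 for n = 3.
lam = float(np.mean((A @ weights) / weights))
ci = (lam - len(A)) / (len(A) - 1)
cr = ci / 0.58
print(dict(zip(factors, np.round(weights, 3))), round(cr, 3))
```

A consistency ratio below 0.1 is the conventional acceptance threshold, matching the abstract's remark that the derived weights had an acceptable consistency ratio; here the trade-value factor receives the largest weight, as in the study's findings.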
P. B. Lanjewar
The evaluation and selection of energy technologies involve a large number of attributes whose selection and weighting are decided in accordance with the social, environmental, technical and economic framework. In the present work an integrated multiple attribute decision making methodology is developed by combining graph theory and analytic hierarchy process (AHP) methods to deal with the evaluation and selection of energy technologies. The energy technology selection attributes digraph enables a quick visual appraisal of the selection attributes and their interrelationships. The preference index provides a total objective score for comparing energy technology alternatives. Application of the matrix permanent offers a better appreciation of the considered attributes and helps to analyze the different alternatives from a combinatorial viewpoint. The AHP is used to assign relative weights to the attributes. Four examples of evaluation and selection of energy technologies are considered in order to demonstrate and validate the proposed method.
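The matrix permanent underlying the preference index in this graph-theoretic approach can be computed by direct expansion for the small matrices typical of attribute digraphs. The example matrix below is illustrative, not taken from the paper.

```python
from itertools import permutations
import numpy as np

def permanent(M):
    """Matrix permanent by direct expansion over all permutations.
    Like the determinant but with no sign changes; fine for the
    small matrices arising from attribute digraphs."""
    n = len(M)
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

# Illustrative consistent reciprocal matrix (a_ij = w_i / w_j):
# every permutation product is 1, so the permanent equals n! = 6.
M = np.array([
    [1.0, 0.5, 0.25],
    [2.0, 1.0, 0.5],
    [4.0, 2.0, 1.0],
])
print(permanent(M))  # → 6.0
```

Direct expansion is O(n!·n), which is acceptable here; for larger matrices Ryser's formula would be the usual choice.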
Fischbach, Jens; Xander, Nina Carolin; Frohme, Marcus; Glökler, Jörn Felix
The need for simple and effective assays for detecting nucleic acids by isothermal amplification reactions has led to a great variety of end point and real-time monitoring methods. Here we tested direct and indirect methods to visualize the amplification of potato spindle tuber viroid (PSTVd) by loop-mediated isothermal amplification (LAMP) and compared features important for one-pot in-field applications. We compared the performance of magnesium pyrophosphate, hydroxynaphthol blue (HNB), calcein, SYBR Green I, EvaGreen, and berberine. All assays could be used to distinguish between positive and negative samples in visible or UV light. Precipitation of magnesium-pyrophosphate resulted in a turbid reaction solution. The use of HNB resulted in a color change from violet to blue, whereas calcein induced a change from orange to yellow-green. We also investigated berberine as a nucleic acid-specific dye that emits a fluorescence signal under UV light after a positive LAMP reaction. It has a comparable sensitivity to SYBR Green I and EvaGreen. Based on our results, an optimal detection method can be chosen easily for isothermal real-time or end point screening applications.
Beal, D. [BA-PIRC, Cocoa, FL (United States); McIlvaine, J. [BA-PIRC, Cocoa, FL (United States); Fonorow, K. [BA-PIRC, Cocoa, FL (United States); Martin, E. [BA-PIRC, Cocoa, FL (United States)
This document illustrates guidelines for the efficient installation of interior duct systems in new housing, including the fur-up chase method, the fur-down chase method, and interior ducts positioned in sealed attics or sealed crawl spaces.
With the rapid growth of data scales in network traffic classification, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, the execution time remained unsatisfactory owing to the many iterative computations during processing. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed for subsets. The optimal feature subset is then selected using the continuous iterations of the Spark computing framework. The implementation demonstrates that, on the precondition of keeping the classification accuracy, our method reduces the time cost of modeling and classification and significantly improves the execution efficiency of feature selection.
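The Fisher-score preprocessing step mentioned above can be sketched as follows, in plain NumPy rather than Spark and on synthetic toy data. The per-feature score is between-class scatter divided by within-class scatter; higher means more discriminative.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: between-class scatter over
    within-class scatter. X is (samples, features), y class labels."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        nc = len(Xc)
        num += nc * (Xc.mean(axis=0) - mu) ** 2
        den += nc * Xc.var(axis=0)
    return num / den

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1, (50, 2)),
               rng.normal([5, 0], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
scores = fisher_score(X, y)
print(scores)  # feature 0 scores far higher than feature 1
```

Ranking features by this score and then growing the subset with a sequential forward search, as the paper describes, keeps only features that actually improve classification.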
Müller, L; Müller, E; Hildebrandt, C; Kapelari, K; Raschner, C
The biological maturity status plays an important role in sports, since it influences the performance level and talent selection in various types of sport. More mature athletes are preferentially selected for regional and national squads. Therefore, the biological maturity status should be considered during the talent selection process. In this context, the relative age effect (RAE), which exists when the relative age quarter distribution of selected sports groups is biased, with an over-representation of athletes born in the first months after the specific cut-off date for the competition categories, represents another problem in talent development. From an ethical point of view, discrimination of young talented kids does exist: the relatively younger athletes have little to no chance of reaching the elite level, despite their talents and efforts. The causal mechanisms behind the RAE are still unclear and have to be assessed. In this context, biological maturation seems to be a possible influential factor for the existence of a RAE in sport, which has to be examined. Several methods for estimating the biological maturity status exist; however, they are often expensive and not practicable. Consequently, the aim of the present study was to assess the concordance of a simple yet accurate method of estimating biological maturation (the prediction equation for age at peak height velocity, APHV, of Mirwald and co-workers) with the gold standard method of estimating skeletal age (SA; x-ray of the left wrist). In total, 75 Austrian students (40♂, 35♀) aged 10-13 years were examined. Thirty of the participants (17♂, 13♀) were students of a well-known Austrian ski boarding school, and 45 (23♂, 22♀) of a non-sportive secondary modern school in the same region. The participants included in the study had not experienced a fracture of the carpal bones of the left wrist. Parents and participants were informed of the study aims
A simulation model for 3D polydisperse bubble column flows in an Eulerian/Eulerian framework is presented. A computationally efficient and numerically stable algorithm is created by making use of quadrature method of moments (QMOM) functionalities, in conjunction with appropriate breakup and coalescence models. To account for size-dependent bubble motion, the constituent moments of the bubble size distribution function are transported with individual velocities. Validation of the simulation results against the experimental and numerical data of Hansen shows the capability of the present model to accurately predict complex gas-liquid flows.
Semenov, V. L.; Kadyshev, E. N.; Zakharova, A. N.; Patianova, A. O.; Dulina, G. S.
The article discusses the use of qualimetry methods using the apparatus of mathematical analysis in the formation of the integral index that allows one to select the best option among competing information technology. The authors propose the use of affine space in the evaluation and selection of competing information technologies.
Philip M. McDonald
Young-growth, mixed-conifer stands were logged by a group-selection method designed to create openings 30, 60, and 90 feet in diameter. Total costs for felling, limbing, bucking, and skidding on these openings ranged from $7.04 to $7.99 per thousand board feet. Cost differences between openings were not statistically significant. Logging costs for group selection...
Herrmann, Bent; Frandsen, Rikke; Holst, René
In this paper, the paired-gear and covered cod-end methods for estimating the selectivity of trawl cod-ends are compared. A modified version of the cod-end selectivity simulator PRESEMO is used to simulate the data that would be collected from a paired-gear experiment where the test cod-end also ...
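A hedged sketch of the logistic selection curve conventionally fitted in cod-end selectivity studies of this kind (this is the standard parameterization, not PRESEMO itself, and the parameter values below are illustrative):

```python
import numpy as np

def retention_probability(length, l50, sr):
    """Logistic cod-end selection curve: l50 is the fish length at
    50% retention, sr the selection range (l75 - l25). The factor
    2*ln(3)/sr makes the curve pass through 0.25 and 0.75 exactly
    sr/2 below and above l50."""
    return 1.0 / (1.0 + np.exp(-2.0 * np.log(3.0) * (length - l50) / sr))

# Illustrative parameters: l50 = 40 cm, sr = 5 cm.
lengths = np.array([30.0, 40.0, 42.5, 50.0])
p = retention_probability(lengths, l50=40.0, sr=5.0)
print(p)  # rises through 0.5 at l50 and 0.75 at l50 + sr/2
```

Paired-gear and covered cod-end experiments differ in how the data for estimating l50 and sr are collected, not in the shape of this underlying curve.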
Pandov, E.; Popandreeva, A.
The principal methods of psychological testing for the selection of nuclear power plant operators are presented. The mobility of the psychic processes, the stability and shift of attention, short-term memory and the speed of the sensory-motor reactions are evaluated by adapted testing procedures to assess the functional status of the applicants. A set of 11 tests, divided into 4 groups according to the qualities under evaluation, is described. The tests include various reactions to light and sound stimuli and a repetitive numerical test in limited time. The differentiating bimodal response is considered the most conclusive for assessing the sensory-motor response, which is of importance in the work of nuclear reactor operators. 4 refs. (R.Ts.)
Al-Fandi, Mohamed; Alshraiedeh, Nida; Owies, Rami; Alshdaifat, Hala; Al-Mahaseneh, Omamah; Al-Tall, Khadijah; Alawneh, Rawan
This paper reports a novel self-detection method for tumor cells using living nano-robots. These living robots are a nonpathogenic strain of E. coli bacteria equipped with naturally synthesized bio-nano-sensory systems that have an affinity for VEGF, an angiogenic factor overexpressed by cancer cells. The VEGF affinity/chemotaxis was assessed using several assays, including the capillary chemotaxis assay, the chemotaxis assay on soft agar, and the chemotaxis assay on solid agar. In addition, a microfluidic device was developed to possibly discover tumor cells through the overexpressed vascular endothelial growth factor (VEGF). Various experiments studying the sensing characteristics of the nano-robots presented a strong response toward VEGF. Thus, a new paradigm of selective targeting therapies for cancer can be advanced using swimming E. coli as self-navigating miniaturized robots as well as drug-delivery vehicles.
Zijp, M C; Waaijers-van der Loop, S L; Heijungs, R; Broeren, M L M; Peeters, R; Van Nieuwenhuijzen, A; Shen, L; Heugens, E H W; Posthuma, L
Sustainability assessments provide scientific support in decision procedures towards sustainable solutions. However, in order to contribute to identifying and choosing sustainable solutions, the sustainability assessment has to fit the decision context. Two complicating factors exist. First, different stakeholders tend to have different views on what a sustainability assessment should encompass. Second, a plethora of sustainability assessment methods exists, due to the multi-dimensional character of the concept. Different methods provide different representations of sustainability. Based on a literature review, we present a protocol to facilitate method selection together with stakeholders. The protocol guides the exploration of (i) the decision context, (ii) the different views of stakeholders and (iii) the selection of pertinent assessment methods. In addition, we present an online tool for method selection. This tool identifies assessment methods that meet the specifications obtained with the protocol, and currently contains characteristics of 30 sustainability assessment methods. The utility of the protocol and the tool was tested in a case study on the recovery of resources from domestic waste water. In several iterations, a combination of methods was selected, followed by execution of the selected sustainability assessment methods. The assessment results can be used in the first phase of the decision procedure that leads to a strategic choice for sustainable resource recovery from waste water in the Netherlands.
A great deal of thought has been given in recent years to the documentation of individual patients and their diseases, especially since the computerization of registry systems facilitates the storage and retrieval of large amounts of data, but the documentation of radiation treatment methods has received surprisingly little attention. The guidelines which follow are intended for use both internally (within radiotherapy centres) and externally when a treatment method is reported in the literature or transferred from one centre to another. The amount of detail reported externally will, of course, depend on the circumstances: for example, a published paper will usually mention only the most important of the radiation and physical parameters, but it is important for the department of origin to list all parameters in a separate document, available on request. These guidelines apply specifically to the documentation of treatment by external radiation beams, although many of the suggestions would also apply to treatment by small sealed sources (brachytherapy) and by unsealed radionuclides. Treatment techniques which involve a combination of external and internal sources (e.g. Ca. cervix uteri treated by intracavitary sources plus external beam therapy) require particularly careful documentation to indicate the relationship between the dose distributions (in both space and time) achieved by the two modalities
Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro
Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3), which occurred off El Salvador and Nicaragua in Central America. Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, for tsunami early warning purposes, our method should work to estimate a fault model which reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
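A minimal sketch of the scaling step described above: converting a W-phase moment magnitude into seismic moment via the standard Hanks-Kanamori relation, then into an average slip for an assumed fault area and rigidity. The rigidity and fault dimensions below are illustrative assumptions, not the study's values.

```python
# Seismic moment (N*m) from moment magnitude, Hanks-Kanamori:
# Mw = (2/3) * (log10(M0) - 9.1)  =>  M0 = 10 ** (1.5 * Mw + 9.1)
def seismic_moment(mw):
    return 10 ** (1.5 * mw + 9.1)

# Average slip from M0 = mu * A * D  =>  D = M0 / (mu * A).
# mu and the fault dimensions are assumed for illustration; the paper
# uses a depth-dependent rigidity rather than a single constant.
mu = 3.0e10            # Pa, assumed crustal rigidity
area = 100e3 * 50e3    # m^2, assumed 100 km x 50 km fault plane
slip = seismic_moment(7.7) / (mu * area)
print(round(slip, 2))  # ≈ 2.98 m of average slip for Mw 7.7
```

Lowering mu near the trench, as a depth-dependent rigidity does, raises the inferred slip for the same moment, which is why tsunami earthquakes like 1992 Nicaragua produce larger tsunamis than their magnitude alone suggests.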
home range maintenance or attraction to or avoidance of landscape features, including roads (Morales et al. 2004, McClintock et al. 2012). For example... radiotelemetry and extensive road survey data are used to generate the first density estimates available for the species. The results show that southern... secretive snakes that combines behavioral observations of snake road crossing speed, systematic road survey data, and simulations of spatial
Griffioen, J.; Passier, H.F.; Klein, J.
Establishment of natural background levels (NBL) for groundwater is commonly performed to serve as a reference when assessing the contamination status of groundwater units. We compare various selection methods to establish NBLs using groundwater quality data for four hydrogeologically different areas
Said, Tamer M.; Land, Jolande A.
BACKGROUND: Current routine semen preparation techniques do not inclusively target all intrinsic sperm characteristics that may impact the fertilization potential. In order to address these characteristics, several methods have been recently developed and applied to sperm selection. The objective of
Roussel, Sophie; Felix, Benjamin; Vingadassalon, Noémie; Grout, Joël; Hennekinne, Jacques-Antoine; Guillier, Laurent; Brisabois, Anne; Auvray, Fréderic
Staphylococcal food poisoning outbreaks (SFPOs) are frequently reported in France. However, most of them remain unconfirmed, highlighting a need for better characterization of isolated strains. Here we analyzed the genetic diversity of 112 Staphylococcus aureus strains isolated from 76 distinct SFPOs that occurred in France over the last 30 years. We used a recently developed multiple-locus variable-number tandem-repeat analysis (MLVA) protocol and compared this method with pulsed field gel electrophoresis (PFGE), spa-typing and carriage of genes (se genes) coding for 11 staphylococcal enterotoxins (i.e., SEA, SEB, SEC, SED, SEE, SEG, SEH, SEI, SEJ, SEP, SER). The strains known to have an epidemiological association with one another had identical MLVA types, PFGE profiles, spa-types and se gene carriage. MLVA, PFGE and spa-typing divided 103 epidemiologically unrelated strains into 84, 80, and 50 types respectively, demonstrating the high genetic diversity of S. aureus strains involved in SFPOs. Each MLVA type shared by more than one strain corresponded to a single spa-type, except for one MLVA type represented by four strains that showed two different, but closely related, spa-types. The 87 enterotoxigenic strains were distributed across 68 distinct MLVA types that all correlated with se gene carriage, except for four MLVA types. The most frequent se gene detected was sea, followed by seg and sei, and the most frequently associated se genes were sea-seh and sea-sed-sej-ser. The discriminatory ability of MLVA was similar to that of PFGE and higher than that of spa-typing. This MLVA protocol was found to be compatible with high-throughput analysis, and was also faster and less labor-intensive than PFGE. MLVA holds promise as a suitable method for investigating SFPOs and tracking the source of contamination in food processing facilities in real time. PMID:26441849
a computerized test designed to assess pilot skills by measuring spatial orientation, psychomotor skills, and multitasking. The second is the... AFRL-SA-WP-SR-2015-0028: Objective Versus Subjective Military Pilot Selection Methods in the United States of America, Joe..., September 2014
The selection of seismic attributes is a key process in reservoir prediction because the prediction accuracy relies on the reliability and credibility of the seismic attributes. However, an effective selection method for useful seismic attributes is still a challenge. This paper presents a novel selection method of seismic attributes for reservoir prediction based on the gray relational degree (GRD) and support vector machine (SVM). The proposed method has a two-level hierarchical structure. In the first hierarchy, the primary selection of seismic attributes is achieved by calculating the GRD between seismic attributes and reservoir parameters, and the GRD between the seismic attributes themselves. The principle of the primary selection is that seismic attributes with higher GRD to the reservoir parameters will have smaller GRD between themselves compared to those with lower GRD to the reservoir parameters. The SVM is then employed in the second hierarchy to perform an interactive error verification using training samples for the purpose of determining the final seismic attributes. A real-world case study was conducted to evaluate the proposed GRD-SVM method. Reliable seismic attributes were selected to predict the coalbed methane (CBM) content in the southern Qinshui basin, China. In the analysis, the instantaneous amplitude, instantaneous bandwidth, instantaneous frequency, and minimum negative curvature were selected, and the predicted CBM content was fundamentally consistent with the measured CBM content. This real-world case study demonstrates that the proposed method is able to effectively select seismic attributes and improve the prediction accuracy. Thus, the proposed GRD-SVM method can be used for the selection of seismic attributes in practice.
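The gray relational degree used in the first hierarchy above can be sketched as Deng's gray relational analysis. The toy sequences below are illustrative stand-ins for a reservoir parameter and two seismic attributes.

```python
import numpy as np

def gray_relational_degree(ref, seqs, rho=0.5):
    """Deng's gray relational analysis: degree of relation between a
    reference sequence (e.g. a reservoir parameter) and comparison
    sequences (e.g. seismic attributes). Higher = more related.
    rho is the distinguishing coefficient, conventionally 0.5."""
    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    r = norm(np.asarray(ref, dtype=float))
    deltas = np.array([np.abs(r - norm(np.asarray(s, dtype=float)))
                       for s in seqs])
    dmin, dmax = deltas.min(), deltas.max()   # global extremes
    coef = (dmin + rho * dmax) / (deltas + rho * dmax)
    return coef.mean(axis=1)

# Toy data: attribute A tracks the reference closely, B does not.
reference = [1, 2, 3, 4, 5]
attr_a = [1.1, 2.0, 2.9, 4.2, 5.0]
attr_b = [3, 1, 4, 1, 5]
grds = gray_relational_degree(reference, [attr_a, attr_b])
print(grds)  # GRD of A clearly exceeds that of B
```

In the paper's scheme, attributes with high GRD to the reservoir parameter but low mutual GRD would pass the primary selection before SVM verification.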
Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)
Our laboratory maintains standards for high doses in India. The glutamine powder dosimeter (spectrophotometric readout) is used for this purpose. Present studies show that 20 mg of unirradiated/irradiated glutamine dissolved in freshly prepared 10 ml of aerated aqueous acidic FX solution containing 2 × 10⁻³ mol dm⁻³ ferrous ammonium sulphate and 10⁻⁴ mol dm⁻³ xylenol orange in 0.033 mol dm⁻³ sulphuric acid is suitable for dosimetry in the dose range of 0.1-100 kGy. Normally no corrections are required for the post-irradiation fading of the irradiated glutamine. The response of the glutamine dosimeter is independent of irradiation temperature in the range of about 23-30 °C; at other temperatures a correction is necessary. The dose intercomparison results for photon, electron and bremsstrahlung radiations show that glutamine can be used as a reference standard dosimeter. The use of flat polyethylene bags containing glutamine powder has proved very successful for electron dosimetry over a wide range of energies. Several other amino acids such as alanine, valine and threonine can also be used to cover a wide range of doses using the spectrophotometric readout method. (author)
Smith, D.R.; Luna, R.E.; Taylor, J.M.
Two studies were completed which evaluate the environmental impact of radioactive material transport. The first was a generic study which evaluated all radioactive materials and all transportation modes; the second addressed spent fuel and fuel-cycle wastes shipped by truck, rail and barge. A portion of each of those studies, dealing with the change in impact resulting from alternative shipping methods, is presented in this paper. Alternatives evaluated in each study were mode shifts, operational constraints, and, in the generic case, changes in material properties and package capabilities. Data for the analyses were obtained from a shipper survey and from projections of shipments that would occur in an equilibrium fuel cycle supporting one hundred 1000-MW(e) reactors. Population exposures were deduced from point-source radiation formulae using separation distances derived for scenarios appropriate to each shipping mode and to each exposed population group. Fourteen alternatives were investigated for the generic impact case. All showed relatively minor changes in the overall radiological impact. Since the impact of radioactive material transport is estimated to be fewer than 3 latent cancer fatalities (LCF) for each shipment year (compared to some 300,000 yearly cancer fatalities, or 5000 LCFs calculated for background radiation using the same radiological effects model), a 15% decrease caused by shifting from passenger air to cargo air is a relatively small effect. Eleven alternatives were considered for the fuel cycle/special train study, but only one produced a reduction in total special train baseline LCFs (0.047) that was larger than 5%
Kim, Yun Goo; Seong, Poong Hyun
The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital Main Control Room. The CPS displays the procedure on the computer screen in the form of a flow chart, and displays plant operating information along with procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error could lead to operator misjudgement and inadequate control. In this paper we present a model of the CPS that enables formal verification based on Petri nets. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that has various interruptions by the operator, according to the plant condition. STPN modeling is compared with Coloured Petri nets when applied to an Emergency Operating Computerized Procedure. A program for converting a Computerized Procedure (CP) to STPN has also been developed. The formal verification and validation methods for CPs with STPN increase the safety of a nuclear power plant and provide the digital quality assurance means that are needed as the role and function of the CPS increase.
Dobie, Robert A; Wojcik, Nancy C
The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to
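The regression step described above, fitting median thresholds to a simple polynomial and differencing the fit to obtain an age correction, can be sketched on synthetic data. The threshold trend below is entirely made up, standing in for the NHANES medians; only the mechanics are illustrated.

```python
import numpy as np

# Illustrative only: synthetic median hearing thresholds (dB HL)
# rising with age, standing in for NHANES median better-ear data.
age = np.arange(20, 76)
thresh = 0.008 * (age - 20) ** 2 + 0.05 * (age - 20)  # made-up trend

# Fit a simple polynomial, as in the paper's approach.
coeffs = np.polyfit(age, thresh, 2)
fit = np.poly1d(coeffs)

def age_correction(age_baseline, age_current):
    """Expected threshold shift attributable to ageing alone,
    between the baseline audiogram age and the current age."""
    return fit(age_current) - fit(age_baseline)

print(round(age_correction(30, 60), 1))  # → 13.5 (dB, synthetic data)
```

Subtracting this correction from the measured shift isolates the occupational component, which is exactly how the OSHA tables are applied.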
Carl, D.E.; Leonard, I.M.
As part of the West Valley Demonstration Project (WVDP), the PUREX supernatant stored in Tank 8D-2 will be partially decontaminated before encapsulation in the final glass form. This report discusses selection of a method for removing Cs-137, the major radioactive ion in the supernatant. Methods considered were: (1) electrodialysis; (2) hyperfiltration; (3) precipitation with ferrocyanide, NaTPB, or PTA; (4) organic ion exchange using Cs-100 or a biologically derived medium; (5) chelation using DeVoe/Holbein compositions; and (6) inorganic ion exchange using Durasil, natural zeolites, or IE-95 or IE-96 media. Several different methods of using inorganic ion exchange media were also reviewed, including (1) four columns with elution, and (2) two, three, or four columns without elution. After careful evaluation of experimental data with all process constraints taken into account, the inorganic exchange medium IE-96 (Linde Ionsiv IE-96 synthetic zeolite) was chosen for WVDP cesium recovery. IE-96 was chosen for the following reasons: high sorption rate, a decontamination factor (DF) over 1000, excellent exchange capacity at WVDP conditions, compatibility with the glass formers used for borosilicate glass in direct melter feed applications, and a history of successful application in radiochemical separation of waste streams. 34 refs., 29 figs., 27 tabs
With the rapid depletion of natural resources and undesired environmental changes globally, more interest has been shown in research on green supply chain practices, including end-of-life vehicle (ELV) recycling. ELV recycling is mandated for auto-manufacturers by legislation for the purpose of minimizing potential environmental damage. The purpose of the present research is to determine the best choice of ELV recycling service provider by employing an integrated hybrid multi-criteria decision making (MCDM) method. In this research, economic, environmental and social factors are taken into consideration. Linguistic variables and trapezoidal fuzzy numbers (TFNs) are applied in the evaluation to deal with vague and qualitative information. With the combined weights of the criteria calculated using fuzzy aggregation and Shannon entropy techniques, the normative multi-criteria optimization technique (fuzzy VIKOR, FVIKOR) method is applied to explore the best solution. An application was performed based on the proposed hybrid MCDM method, and sensitivity analysis was conducted on different decision making scenarios. The present study provides a decision-making approach for ELV recycling business selection under sustainability and green philosophy with high robustness and easy implementation.
Guardiola, Carlos; Climent, Héctor; Pla, Benjamín; Reig, Alberto
Highlights: • Optimal control is applied for heat release shaping in internal combustion engines. • Optimal control allows assessment of the engine performance against a realistic reference. • The proposed method gives a target heat release law for defining control strategies. - Abstract: The present paper studies the optimal heat release law in a Diesel engine to maximise the indicated efficiency subject to different constraints, namely: maximum cylinder pressure, maximum cylinder pressure derivative, and NOx emission restrictions. With this objective, a simple but representative model of the combustion process has been implemented. The model consists of a 0D energy balance model aimed at providing the pressure and temperature evolutions in the high-pressure loop of the engine thermodynamic cycle from the gas conditions at intake valve closing and the heat release law. The gas pressure and temperature evolutions allow the engine efficiency and NOx emissions to be computed. The comparison between model and experimental results shows that, despite the model's simplicity, it is able to reproduce the engine efficiency and NOx emissions. After the model identification and validation, the optimal control problem is posed and solved by means of Dynamic Programming (DP). Also, if only pressure constraints are considered, the paper proposes a solution that reduces the computation cost of the DP strategy by two orders of magnitude for the case being analysed. The solution provides a target heat release law for defining injection strategies, but also a more realistic maximum efficiency boundary than the ideal thermodynamic cycles usually employed to estimate the maximum engine efficiency.
At present, in the fault diagnosis database of submarine optical fiber networks, attribute selection for large data is completed by detecting the attributes of the data, so the accuracy of large data attribute selection cannot be guaranteed. In this paper, a large data attribute selection method based on support vector machines (SVM) for the fault diagnosis database of submarine optical fiber networks is proposed. Large data in the fault diagnosis database of the optical fiber network are mined and their attribute weights calculated; attribute classification is completed according to attribute weight, so as to complete the attribute selection of large data. Experimental results prove that the proposed method can improve the accuracy of large data attribute selection in the fault diagnosis database of submarine optical fiber networks, and has high use value.
Na, Man Gyun; Sim, Young Rok
It is well known that the performance of a fuzzy neural network strongly depends on the input features selected for its training. In its applications to sensor signal estimation, there are a large number of input variables related to an output. As the number of input variables increases, the training time required by a fuzzy neural network increases exponentially. Thus, it is essential to reduce the number of inputs to a fuzzy neural network and to select the optimum number of mutually independent inputs that are able to clearly define the input-output mapping. In this work, principal component analysis (PCA), genetic algorithms (GA) and probability theory are combined to select new important input features. The proposed feature selection method is applied to the signal estimation of the steam generator water level, the hot-leg flowrate, the pressurizer water level and the pressurizer pressure sensors in pressurized water reactors, and compared with other input feature selection methods
Tiago Antonio de Souza
Full Text Available Plant pathogenic bacteria utilize an array of effector proteins to cause disease. Among them, transcriptional activator-like (TAL) effectors are unusual in the sense that they modulate transcription in the host. Although target genes and DNA specificity of TAL effectors have been elucidated, how TAL proteins control host transcription is poorly understood. Previously, we showed that the Xanthomonas citri TAL effectors, PthAs 2 and 3, preferentially targeted a citrus protein complex associated with transcription control and DNA repair. To extend our knowledge on the mode of action of PthAs, we have identified new protein targets of the PthA4 variant, required to elicit canker on citrus. Here we show that all the PthA4-interacting proteins are DNA and/or RNA-binding factors implicated in chromatin remodeling and repair, gene regulation and mRNA stabilization/modification. The majority of these proteins, including a structural maintenance of chromosomes protein (CsSMC), a translin-associated factor X (CsTRAX), a VirE2-interacting protein (CsVIP2), a high mobility group (CsHMG) and two poly(A)-binding proteins (CsPABP1 and 2), interacted with each other, suggesting that they assemble into a multiprotein complex. CsHMG was shown to bind DNA and to interact with the invariable leucine-rich repeat region of PthAs. Surprisingly, both CsHMG and PthA4 interacted with PABP1 and 2 and showed selective binding to poly(U) RNA, a property that is novel among HMGs and TAL effectors. Given that homologs of CsHMG, CsPABP1, CsPABP2, CsSMC and CsTRAX in other organisms assemble into protein complexes to regulate mRNA stability and translation, we suggest a novel role of TAL effectors in mRNA processing and translational control.
Melendy, L.; Hagen, S. C.; Sullivan, F. B.; Pearson, T. R. H.; Walker, S. M.; Ellis, P.; Kustiyo; Sambodo, Ari Katmoko; Roswintiarti, O.; Hanson, M. A.; Klassen, A. W.; Palace, M. W.; Braswell, B. H.; Delgado, G. M.
Selective logging has an impact on the global carbon cycle, as well as on the forest micro-climate and longer-term changes in erosion, soil and nutrient cycling, and fire susceptibility. Our ability to quantify these impacts depends on methods and tools that accurately identify the extent and features of logging activity. LiDAR-based measurements of these features offer significant promise. Here, we present a set of algorithms for automated detection and mapping of critical features associated with logging - roads/decks, skid trails, and gaps - using commercial airborne LiDAR data as input. The automated algorithm was applied to commercial LiDAR data collected over two logging concessions in Kalimantan, Indonesia in 2014. The algorithm results were compared to measurements of the logging features collected in the field soon after logging was complete. The automatically mapped road/deck and skid trail features match closely with features measured in the field, with agreement levels ranging from 69% to 99% when adjusting for GPS location error. The algorithm performed most poorly with gaps, which, by their nature, are variable due to the unpredictable impact of tree fall versus the linear and regular features directly created by mechanical means. Overall, the automated algorithm performs well and offers significant promise as a generalizable tool to efficiently and accurately capture the effects of selective logging, including the potential to distinguish reduced-impact logging from conventional logging.
Jacques V. Hugo
Full Text Available The new generation of nuclear power plants (NPPs) will likely make use of state-of-the-art technologies in many areas of the plant. The analysis, design, and selection of advanced human–system interfaces (HSIs) constitute an important part of power plant engineering. Designers need to consider the new capabilities afforded by these technologies in the context of current regulations and new operational concepts, which is why they need a more rigorous method by which to plan the introduction of advanced HSIs in NPP work areas. Much of current human factors research stops at the user interface and fails to provide a definitive process for integration of end user devices with instrumentation and control and operational concepts. The current lack of a clear definition of HSI technology, including the process for integration, makes characterization and implementation of new and advanced HSIs difficult. This paper describes how new design concepts in the nuclear industry can be analyzed and how HSI technologies associated with new industrial processes might be considered. It also describes a basis for an understanding of human as well as technology characteristics that could be incorporated into a prioritization scheme for technology selection and deployment plans.
Kim, Young Sik
Transportation of spent nuclear fuel (SNF) may cause additional radiation exposure to human beings, which means that the radiological risk should be estimated and managed quantitatively for the public living near the shipment route. In the existing method, before SNF transportation is performed, route selection is generally based on the radiological risk estimated with the RADTRAN code. The existing method for route selection thus rests only on radiological health risk, yet beyond the impacts related to radiological health there are also socioeconomic impacts related to cost. In this study, a new method and its numerical formula for route selection in transporting SNF are proposed based on cost estimation, because several costs arise in transporting SNF. The total cost consists of the radiological health cost, the transportation cost, and the socioeconomic cost. Each cost is defined according to the characteristics of SNF transportation, and the many coefficients and variables describing the meaning of each cost are obtained or estimated through surveys. In particular, to obtain the socioeconomic cost, the contingent valuation method (CVM) is used with a questionnaire. The socioeconomic cost estimation is the most important part of the total cost arising from transporting SNF because it dominates the total cost. Route selection for SNF transportation can be supported reasonably with the proposed method, and unnecessary or exhausting controversies about the shipments could be avoided.
Hines, J. W.; Garvey, D. R.
Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
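Of the five traditional methods listed, min-max selection is the simplest to sketch: keep every training vector that holds the minimum or maximum of at least one signal, so the retained prototypes span the observed operating range. A minimal illustration (the function name and data are invented, not from the paper):

```python
import numpy as np

def min_max_select(X):
    """Return sorted indices of training vectors (rows of X) that contain
    the min or max of at least one signal (column)."""
    keep = set()
    for j in range(X.shape[1]):
        keep.add(int(np.argmin(X[:, j])))   # row holding this signal's minimum
        keep.add(int(np.argmax(X[:, j])))   # row holding this signal's maximum
    return sorted(keep)
```

For an n-signal data set this keeps at most 2n prototypes, which is why the abstract treats it as a cheap baseline against clustering-based selection.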
Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali
The complexity and heterogeneity of bone tissue require multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including the microfibril scale, based on hybrid neural network (NN) computation and homogenization equations, was developed to link the nanoscopic and macroscopic scales and to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation, where finite element calculations performed at the nanoscopic level provide a database to train an in-house NN program; and (iii) in steps 2-10, from fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of the mineralized collagen fibril. The mechanical and geometrical properties of the bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken into consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to the mesostructural level. The outputs of the lowest scale were well integrated with the higher levels and served as inputs for the next-higher-scale modeling. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
A filmless X-ray imaging system includes at least one X-ray source, upper and lower collimators, and a solid-state detector array, and can provide three-dimensional imaging capability. The X-ray source plane is distance z.sub.1 above upper collimator plane, distance z.sub.2 above the lower collimator plane, and distance z.sub.3 above the plane of the detector array. The object to be X-rayed is located between the upper and lower collimator planes. The upper and lower collimators and the detector array are moved horizontally with scanning velocities v.sub.1, v.sub.2, v.sub.3 proportional to z.sub.1, z.sub.2 and z.sub.3, respectively. The pattern and size of openings in the collimators, and between detector positions is proportional such that similar triangles are always defined relative to the location of the X-ray source. X-rays that pass through openings in the upper collimator will always pass through corresponding and similar openings in the lower collimator, and thence to a corresponding detector in the underlying detector array. Substantially 100% of the X-rays irradiating the object (and neither absorbed nor scattered) pass through the lower collimator openings and are detected, which promotes enhanced sensitivity. A computer system coordinates repositioning of the collimators and detector array, and X-ray source locations. The computer system can store detector array output, and can associate a known X-ray source location with detector array output data, to provide three-dimensional imaging. Detector output may be viewed instantly, stored digitally, and/or transmitted electronically for image viewing at a remote site.
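The proportional-velocity constraint described above is plain similar-triangles arithmetic: each moving plane must sweep at a speed proportional to its distance below the X-ray source so that collimator openings and detectors stay aligned along rays from the source. A hedged sketch (the function name and numbers are illustrative):

```python
def scan_velocities(v1, z1, z2, z3):
    """Given the upper-collimator speed v1 at depth z1 below the source,
    return the lower-collimator and detector speeds that preserve the
    similar-triangle alignment, i.e. keep v/z constant."""
    return v1 * z2 / z1, v1 * z3 / z1
```

For example, with the upper collimator at 1 m moving at 10 cm/s, a lower collimator at 2 m and a detector plane at 3 m must move at 20 and 30 cm/s respectively.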
Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach
The aim of this metabolomics study was to investigate the metabolic profile of pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomic approach. Due to the high number of variables in the data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related in an integrated approach. Different strategies for variable selection (or removal of irrelevant variables) with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility of biological validation. Sparse PLSR as well as PLSR with jack-knifing was applied to the data in order to achieve variable selection prior to the integrated analysis. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when...
Tohda, K; Dragoe, D; Shibata, M; Umezawa, Y
A theory is presented that describes the matched potential method (MPM) for the determination of the potentiometric selectivity coefficients (KA,Bpot) of ion-selective electrodes for two ions of any charge. This MPM theory is based on the electrical diffuse layers on both the membrane and the aqueous side of the interface, and is therefore independent of the Nicolsky-Eisenman equation. Instead, the Poisson equation is used and a Boltzmann distribution is assumed for all charged species, including primary, interfering and background electrolyte ions located in the diffuse double layers. In this model, the MPM selectivity coefficients of ions with equal charge (ZA = ZB) are expressed as the ratio of the concentrations of the primary and interfering ions in aqueous solutions at which the same amounts of the primary and interfering ions are permselectively extracted into the membrane surface. For ions with unequal charge (ZA not equal to ZB), the selectivity coefficients are expressed as a function not only of the amounts of the primary and interfering ions permeating into the membrane surface, but also of the primary ion concentration in the initial reference solution and the delta EMF value. Using the measured complexation stability constants and single-ion distribution coefficients for the relevant systems, the corresponding MPM selectivity coefficients can be calculated from the developed MPM theory. It was found that this MPM theory is capable of accurately and precisely predicting the MPM selectivity coefficients for a series of ion-selective electrodes (ISEs) with representative ionophore systems, in complete agreement with MPM selectivity values determined independently from potentiometric measurements. These results also confirm that the Boltzmann distribution assumption was in fact valid in the theory. The recent critical papers on MPM have pointed out that because the MPM selectivity coefficients are highly concentration
Mohwinkel, Dennis; Kleint, Charlotte; Koschinsky, Andrea
Highlights: • Phase associations of metals in marine Fe–Mn nodules and crusts were determined. • Selective leaching experiments with siderophore desferrioxamine B were conducted. • Siderophores selectively mobilize high-tech metals associated with Fe carrier phases. • Base metal liberation including Fe and Mn is limited. • Siderophores have promising potential for application in ore processing industries. - Abstract: Deep-sea ferromanganese deposits contain a wide range of economically important metals. Ferromanganese crusts and nodules represent an important future resource, since they not only contain base metals such as Mn, Ni, Co, Cu and Zn, but are also enriched in critical or rare high-technology elements such as Li, Mo, Nb, W, the rare earth elements and yttrium (REY). These metals could be extracted from nodules and crusts as a by-product to the base metal production. However, there are no proper separation techniques available that selectively extract certain metals out of the carrier phases. By sequential leaching, we demonstrated that, except for Li, which is present in an easily soluble form, all other high-tech metals enriched in ferromanganese nodules and crusts are largely associated with the Fe-oxyhydroxide phases and only to subordinate extents with Mn-oxide phases. Based on this fact, we conducted selective leaching experiments with the Fe-specific organic ligand desferrioxamine-B, a naturally occurring and ubiquitous siderophore. We showed by leaching of ferromanganese nodules and crusts with desferrioxamine-B that a significant and selective extraction of high-tech metals such as Li, Mo, Zr, Hf and Ta is possible, while other elements like Fe and the base metals Mn, Ni, Cu, Co and Zn are not extracted to large extents. The set of selectively extracted elements can be extended to Nb and W if Mn and carbonate phases are stripped from the bulk nodule or crust prior to the siderophore leach by e.g. a sequential leaching technique. This
Chakraborty, Shankar; Chatterjee, Prasenjit; Prasad, Kanika
Selection of the most appropriate cotton fibre type for yarn manufacturing is often treated as a multi-criteria decision-making (MCDM) problem, as the optimal selection decision needs to be taken in the presence of several conflicting fibre properties. In this paper, two popular MCDM methods, decision making trial and evaluation laboratory (DEMATEL) and VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), are integrated to aid the cotton fibre selection decision. The DEMATEL method addresses the interrelationships between various physical properties of cotton fibres while segregating them into cause and effect groups, whereas the VIKOR method helps in ranking all 17 considered cotton fibres from best to worst. The derived ranking of cotton fibre alternatives closely matches that obtained by past researchers. This model can assist spinning industry personnel in the blending process in making accurate fibre selection decisions when cotton fibre properties are numerous and interrelated.
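A compressed sketch of the VIKOR ranking step is given below. It is a generic textbook formulation, not the integrated DEMATEL-VIKOR model of the abstract: benefit-type criteria only, an invented decision matrix, and the usual compromise weight v = 0.5 are all assumptions.

```python
import numpy as np

def vikor(F, w, v=0.5):
    """Rank alternatives (rows of F, benefit criteria) by the VIKOR
    compromise index Q; returns indices, best alternative first."""
    best, worst = F.max(axis=0), F.min(axis=0)
    d = np.where(best > worst, best - worst, 1.0)   # guard against /0
    gaps = w * (best - F) / d                       # weighted normalized gaps
    S, R = gaps.sum(axis=1), gaps.max(axis=1)       # group utility, max regret
    def scale(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span else np.zeros_like(x)
    Q = v * scale(S) + (1 - v) * scale(R)
    return np.argsort(Q)
```

With a dominant first alternative the ranking is unambiguous, e.g. `vikor(np.array([[9., 8.], [5., 5.], [1., 2.]]), np.array([0.5, 0.5]))` puts alternative 0 first.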
Osga, Glenn A.
The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by: (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, or touchscreen; and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with the various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit of applying this method to graphic user-interfaces is substantial, with the potential for increasing productivity across thousands of users and applications.
Full Text Available A recommended method has been developed and published by CORESTA, applicable to the quantification of selected volatiles (1,3-butadiene, isoprene, acrylonitrile, benzene, and toluene) in the gas phase of cigarette mainstream smoke. The method involved smoke collection in impinger traps and detection and measurement using gas chromatography/mass spectrometry techniques.
Grabow, W O; De Villiers, J C; Schildhauer, C I
In a comparison of five selected methods for the enumeration of fecal coliforms and Escherichia coli in naturally contaminated and sewage-seeded mussels (Choromytilus spp.) and oysters (Ostrea spp.), a spread-plate procedure with mFC agar without rosolic acid and preincubation proved the method of choice for routine quality assessment.
Heidaritabar, M; Vereijken, A; Muir, W M; Meuwissen, T; Cheng, H; Megens, H-J; Groenen, M A M; Bastiaansen, J W M
Genomic selection (GS) is a DNA-based method of selecting for quantitative traits in animal and plant breeding, and offers a potentially superior alternative to traditional breeding methods that rely on pedigree and phenotype information. Using a 60 K SNP chip with markers spaced throughout the entire chicken genome, we compared the impact of GS and traditional BLUP (best linear unbiased prediction) selection methods applied side-by-side in three different lines of egg-laying chickens. Differences were demonstrated between methods, both in the level and in the genomic distribution of allele frequency changes. In all three lines, the average allele frequency changes were larger with GS, 0.056, 0.064 and 0.066, compared with BLUP, 0.044, 0.045 and 0.036 for lines B1, B2 and W1, respectively. With BLUP, 35 selected regions (exceeding the empirical P threshold) were identified. Empirical thresholds for local allele frequency changes were determined from gene dropping, and differed considerably between GS (0.167-0.198) and BLUP (0.105-0.126). Between lines, the genomic regions with large changes in allele frequencies showed limited overlap. Our results show that GS applies selection pressure much more locally than BLUP, resulting in larger allele frequency changes. With these results, novel insights into the nature of selection on quantitative traits have been gained and important questions regarding the long-term impact of GS are raised. The rapid changes to a part of the genetic architecture, while another part may not be selected, at least in the short term, require careful consideration, especially when selection occurs before phenotypes are observed.
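The summary statistic compared between GS and BLUP above, the average absolute allele-frequency change between two generations, is straightforward to compute; the frequencies in the example below are invented, not the study's data.

```python
def mean_allele_freq_change(p0, p1):
    """Average absolute change in allele frequency across loci between
    two generations, given per-locus frequencies p0 (before) and p1 (after)."""
    assert len(p0) == len(p1), "frequency vectors must cover the same loci"
    return sum(abs(b - a) for a, b in zip(p0, p1)) / len(p0)
```

Applied genome-wide, a larger value of this statistic under GS than under BLUP is exactly the line-level comparison (e.g. 0.056 vs 0.044) reported in the abstract.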
Garcia Alonso, S.; Perez Pastor, R. M.
A study on the comparison and evaluation of three miniaturized extraction methods for the determination of selected PACs in clear waters is presented. Three types of liquid-liquid extraction were used prior to chromatographic analysis by HPLC with fluorescence detection. The main objective was the optimization and development of simple, rapid and low-cost methods, minimizing the volume of extraction solvent used. The work also includes a study of the scope of the developed methods at low and high concentration levels, and of their intermediate precision. (Author)
Full Text Available Nowadays, the battlefield environment has become much more complex and variable. Based on the principle of information entropy, this paper presents a quantitative method, with a lower bound, for the amount of target information acquired from multiple radar observations, so that battlefield detection resources can be organized adaptively and dynamically. Furthermore, to minimize this lower bound on the information entropy of the target measurement at every moment, a method is proposed to dynamically and adaptively select radars with a high amount of information for target tracking. The simulation results indicate that the proposed method yields higher tracking accuracy than tracking without entropy-based adaptive radar selection.
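For Gaussian measurement errors, differential entropy grows with the log-determinant of the error covariance, so an entropy-guided selection can simply pick the radar whose covariance has the smallest determinant. The sketch below is one plausible reading of such a criterion, not the paper's exact formula; the 2x2 covariances in the example are invented.

```python
import math

def gaussian_entropy(cov):
    """Differential entropy of a 2D Gaussian with covariance
    cov = [[sxx, sxy], [sxy, syy]]: 0.5 * ln((2*pi*e)^2 * det)."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return 0.5 * math.log((2 * math.pi * math.e) ** 2 * det)

def select_radar(covs):
    """Index of the radar whose measurement carries the most information,
    i.e. whose error covariance has the lowest entropy."""
    return min(range(len(covs)), key=lambda i: gaussian_entropy(covs[i]))
```

Since the entropy is monotone in the determinant, the radar with the tightest error ellipse is chosen at each step.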
Full Text Available Project delivery planning is a key stage used by the project owner (or project investor) for organizing design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on the PDM. Experiment results show that project owners usually choose the Design-Build method when the project is very complex, within a certain range. Besides, this paper points out that the Design-Build method will be the preferred choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.
MacLellan, Ellen; Shore, David I; Milliken, Bruce
Selective attention is generally studied with conflict tasks, using response time as the dependent measure. Here, we study the impact of selective attention to a first target, T1, presented simultaneously with a distractor, on the accuracy of subsequent encoding of a second target item, T2. This procedure produces an "attentional blink" (AB) effect much like that reported in other studies, and allowed us to study the influence of context on cognitive control with a novel method. In particular, we examined whether preparation to attend selectively to T1 had an impact on the selective encoding of T1 that would translate to report of T2. Preparation to attend selectively was manipulated by varying whether difficult selective attention T1 trials were presented in the context of other difficult selective attention T1 trials. The results revealed strong context effects of this nature, with smaller AB effects when difficult selective attention T1 trials were embedded in a context with many, rather than few, other difficult selective attention T1 trials. Further, the results suggest that both the trial-to-trial local context and the block-wide global context modulate performance in this task.
the athletes improved their points by around 20%, while no improvement was observed for the non-athletes. Non-athletes scored worst on the pink balls, and during the second trial a minor decrease in their points was observed. Table tennis athletes demonstrated the highest improvement in points in the second trial for the pink balls. The findings of the selective action array developed for this study showed that in the first phase, and especially in racket sports, sustained attention and visual perception could be developed rapidly. Thus, by changing the number and color of the balls as well as the actions to be taken, this method could be a new approach for including cognitive attributes in the athlete selection process of other sports.
Lin, Hui-Chen; Lin, Chi-Yi; Chien, Tsui-Wei; Liu, Kuei-Fen; Chen, Miao-Yen; Lin, Wen-Chuan
A constellation of factors accounts for teaching efficacy in the fundamental nursing practicum. Teachers play a critical role in designing and executing an appropriate teaching plan, choosing effective methods, and holding appropriate teaching attitudes. It is thus extremely important that clinical teachers master the core characteristics of basic nursing practice. This study aimed to illuminate the core characteristics of basic nursing practice for students, for reference by clinical practicum teachers. Qualitative research was used to identify the fundamentals of nursing practice by clinical teachers. Five focus group meetings were convened during the practice period. The researchers presided over group discussions held during the normal weekly teaching schedule, each lasting approximately 2-4 hours. Content analysis was adopted to analyze the data. Three major themes were proposed: (1) student status: "novices were stymied by problems and thus improved slowly"; (2) teacher awareness: "teachers need to be aware of student capabilities, mood, and discomfort"; and (3) teaching style: "a good choice of methods should support and encourage students". To cultivate professional nursing knowledge and self-confidence for future professional commitment, clinical teachers must first understand the characteristics and learning motivations of their students and then select the methods, skills, and attitudes appropriate to provide step-by-step guidance. Communication with staff and preparation of the atmosphere prior to nursing practice are also essential for students. Results provide insights into the technical college environment with regard to basic-level clinical nursing practice.
Fisk, W.J.; Faulkner, D.; Sullivan, D. [and others]
To test proposed methods for reducing SBS symptoms and to learn about the causes of these symptoms, a double-blind controlled intervention study was designed and implemented. This study utilized two different interventions designed to reduce occupants' exposures to airborne particles: (1) high efficiency filters in the building's HVAC systems; and (2) thorough cleaning of carpeted floors and fabric-covered chairs with an unusually powerful vacuum cleaner. The study population was the workers on the second and fourth floors of a large office building with mechanical ventilation, air conditioning, and sealed windows. Interventions were implemented on one floor while the occupants on the other floor served as a control group. For the enhanced-filtration intervention, a multiple crossover design was used (a crossover is a repeat of the experiment with the former experimental group as the control group and vice versa). Demographic and health symptom data were collected via an initial questionnaire on the first study week and health symptom data were obtained each week, for eight additional weeks, via weekly questionnaires. A large number of indoor environmental parameters were measured during the study including air temperatures and humidities, carbon dioxide concentrations, particle concentrations, concentrations of several airborne bioaerosols, and concentrations of several microbiologic compounds within the dust sampled from floors and chairs. This report describes the study methods and summarizes the results of selected environmental measurements.
Crnogorac Miroslav P.
Full Text Available In the world today, different types of deep pumps (piston, centrifugal, screw, hydraulic, and water jet pumps) and gas lift (continuous, intermittent and plunger) are applied for the exploitation of oil reservoirs by artificial lift methods. The maximum values of oil production achieved by these exploitation methods differ significantly. In order to select the optimal exploitation method for an oil well, multicriteria analysis models are used. This paper presents an analysis of the application of the multicriteria models known as VIKOR, TOPSIS, ELECTRE, AHP and PROMETHEE to the selection of the optimal exploitation method for a typical oil well in the Serbian exploration area. The ranking results for the applicability of deep piston pumps, hydraulic pumps, screw pumps, the gas lift method and electric submersible centrifugal pumps indicated that in all of the above multicriteria models except PROMETHEE, the optimal methods of exploitation are deep piston pumps and gas lift.
Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan
The selection of filter parameters in target recognition was studied in this paper. The PROMETHEE method was applied to the optimization problem of Gabor filter parameter decision, and a correspondence model of the elemental relation between the two methods was established. Taking the identification of a military target as an example, the filter parameter decision problem was simulated and calculated with PROMETHEE. The results showed that using the PROMETHEE method for the selection of filter parameters is more scientific: the human disturbance introduced by expert judgement and empirical methods can be avoided in this way. The method can provide a reference for deciding the parameter configuration scheme of the filter.
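A minimal PROMETHEE II net-flow computation, using the simple "usual" preference function, can be sketched as follows. The decision matrix and weights are invented, all criteria are treated as benefit criteria, and this is a generic formulation rather than the paper's Gabor-parameter model.

```python
import numpy as np

def promethee_ii(F, w):
    """Net outranking flow for each alternative (row of F) under the
    'usual' preference function: preference is 1 on every criterion
    where a strictly beats b, else 0, weighted by w."""
    n = F.shape[0]
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = w[F[a] > F[b]].sum()   # how much a outranks b
            pref_ba = w[F[b] > F[a]].sum()   # how much b outranks a
            phi[a] += (pref_ab - pref_ba) / (n - 1)
    return phi   # higher net flow = better alternative
```

Alternatives are then ranked by descending net flow, which is the ranking the abstract uses to pick the filter parameter set.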
Cheng Yingsheng; Yang Renjie; Li Minghua; Chen Weixiong; Shang Kezhong; Zhuang Qixin; Xu Jianrong; Chen Niwei; Zhu Yude
Objective: To study method selection and to evaluate the mid-term and long-term therapeutic efficacy of three interventional procedures for achalasia. Methods: 50 achalasia cases were treated under fluoroscopy: 30 with balloon dilation (group A), 5 with permanent metallic internal stent dilation (group B) and 15 with temporary metallic internal stent dilation (group C). Results: The 30 cases of group A underwent 56 dilations (mean 1.9). The mean diameter of the cardia was (2.4 ± 1.2) mm before dilation and (9.7 ± 3.0) mm after dilation. The mean dysphagia scores were 2.4 ± 1.2 grades before dilation and 1.0 ± 0.3 grades after dilation. Complications in the 30 cases included chest pain (n = 9), reflux (n = 8) and bleeding (n = 3). 18 (60%) of the 30 cases showed dysphagia relapse during follow-up over 6 months, and 18 (90%) of 20 cases during follow-up over 12 months. 5 uncovered expandable metal stents were permanently placed in the 5 cases of group B. The mean diameter of the cardia was (3.2 ± 2.0) mm before dilation and (18.4 ± 1.7) mm after dilation. The mean dysphagia scores were (2.4 ± 1.1) grades before dilation and (0.4 ± 0.2) grades after dilation. Complications in the 5 cases included chest pain (n = 3), reflux (n = 4), bleeding (n = 1) and hyperplasia of granulation tissue (n = 2). 3 (60%) of the 5 cases showed dysphagia relapse during follow-up over 6 months, and 1 (50%) of 2 cases during follow-up over 12 months. 15 covered expandable metal stents were temporarily placed in the 15 cases of group C and withdrawn at 3-7 days via gastroscopy. The mean diameter of the cardia was (3.4 ± 2.9) mm before dilation and (14.7 ± 2.9) mm after dilation. The mean dysphagia scores were (2.5 ± 1.1) grades before dilation and (0.6 ± 0.3) grades after dilation. Complications in the 15 cases included chest pain (n = 3), reflux (n = 3) and bleeding (n = 2). 3 (20%) of the 15 cases showed dysphagia relapse during follow-up over 6
Dohrmann, R.; Kaufhold, S.
competition to barium ions in smectite interlayers. The present study aims to propose a selection strategy for CEC methods, including exchangeable cation determination, for different clay materials typically used in radioactive waste disposal studies. 1) If the clay/bentonite contains calcium carbonates but no gypsum, several methods are available, all of them based on saturation of the exchange solution with calcite before the exchange experiment starts: AgTU-calcite, CoHex-calcite, and Cu-trien5xcalcite. If dolomite is present, the initial calcite saturation minimizes dolomite dissolution as well (proven for AgTU-calcite). 2) If a bentonite contains carbonates and gypsum, the only known successful method for determining all exchangeable cations (including calcium) and the CEC is a combination of two separate results: i) calcite saturation of the exchange solution (e.g. Cu-trien5xcalcite) and ii) quantification of gypsum with suitable mineralogical methods. Result i) is free of error caused by calcite dissolution; however, it is still biased because it contains significant amounts of Ca2+ from gypsum dissolution. After proving that gypsum was completely dissolved during the exchange experiment, result ii) was used to subtract the theoretical Ca2+ portion of gypsum from result i). Gypsum saturation of exchange solutions was found to be unsuitable owing to the high solubility of gypsum, which results in high initial Ca2+ concentrations. 3) If no such soluble minerals are present and the pore water contains no excess electrolyte, then usually any accepted CEC method can be used; however, the results should always be checked for plausibility (e.g. against the mineralogical composition of the sample). "Any accepted method" means that methods with known systematic errors (hydrophobic interaction, dependency on layer charge density) are excluded. (authors)
Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li
Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus a linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time together, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy at a lower time cost. The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users care more about computation time, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is a better choice.
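A minimal sketch of this kind of comparison using scikit-learn, with synthetic data standing in for fMRI voxel patterns; univariate ANOVA selection here is just one illustrative stand-in for the study's six voxel selection methods, and the feature counts are arbitrary:

```python
# Compare linear vs RBF-kernel SVM after univariate "voxel" selection,
# in low- and high-dimensional feature spaces (synthetic data).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=500, n_informative=30,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

for kernel in ("linear", "rbf"):
    for k in (20, 200):  # relatively low- vs high-dimensional feature space
        clf = make_pipeline(SelectKBest(f_classif, k=k),
                            StandardScaler(), SVC(kernel=kernel))
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{kernel:6s} kernel, k={k:3d}: accuracy {acc:.2f}")
```

On real fMRI data the relative ranking would of course depend on the voxel selection scheme and the dimensionality, as the abstract reports.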
Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C
Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information in the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions under which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.
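The abstract does not spell out the entropy measure; the sketch below shows one plausible SVD-based version, where low entropy means the matrix maps easily onto a low-dimensional subspace. The exact formula the authors use may differ:

```python
# SVD-based entropy as a proxy for expression-matrix "complexity":
# how evenly variance spreads across the singular-value spectrum.
import numpy as np

def svd_entropy(X):
    s = np.linalg.svd(X, compute_uv=False)
    p = s**2 / np.sum(s**2)          # normalized eigen-spectrum
    p = p[p > 0]
    # Shannon entropy scaled by log(rank bound) so the result lies in [0, 1]
    return -np.sum(p * np.log(p)) / np.log(len(s))

rng = np.random.default_rng(0)
low = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 20))   # rank ~3: simple
high = rng.normal(size=(100, 20))                            # full rank: complex
print(svd_entropy(low), "<", svd_entropy(high))
```

A low-rank matrix (easily mapped to a small subspace) scores near 0; an unstructured full-rank matrix scores near 1, matching the intuition of "complexity" in the abstract.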
This paper proposes the application of the Analytic Hierarchy Process method to support decision making regarding the selection of an Enterprise Resource Planning system in a manufacturing company. The main assumption of the work is that the most important selection criteria concern the functionality of the ERP system. Besides this, the aspects of total cost of ownership, technical support, implementation time and vendor experience are taken into consideration to guarantee a successful ERP implementation. The proposed procedure of ERP system selection is intended for small and medium manufacturing enterprises. A structure of attributes for the AHP method is proposed on the basis of an analysis and identification of critical success factors. Different kinds of production (make-to-stock, make-to-order and engineer-to-order) are taken into consideration. Illustrative examples are also given.
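A compact sketch of the AHP core the paper relies on: criterion weights are derived from a pairwise comparison matrix via the principal eigenvector, with a consistency-ratio check. The criteria and judgments below are illustrative, not taken from the paper:

```python
# AHP priority vector for four illustrative ERP-selection criteria:
# functionality, total cost of ownership, technical support, implementation time.
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale (functionality dominates).
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority vector (sums to 1)

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)      # consistency index
CR = CI / 0.90                            # random index RI = 0.90 for n = 4
print("weights:", np.round(w, 3), "CR:", round(CR, 3))
```

A CR below 0.1 is the conventional threshold for accepting the judgments as consistent; here the functionality criterion receives the largest weight, matching the paper's assumption.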
Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)
Jatropha curcas L. (Euphorbiaceae) is an oil-bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preference between the various traits and scoring for each trait have been worked out using the method of paired comparisons for the selection of CPPs in J. curcas L. The most important traits are seed and oil yields. (author)
Wang Xiaojia; Mao Qirong; Zhan Yongzhao
There are many speech emotion features. If all of these features are employed to recognize emotions, redundant features may exist; furthermore, the recognition result may be unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features by using the contribution analysis algorithm of the NN. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke
Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
Fan, Xueman; Hu, Shengliang; He, Jingbo
In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
Coincidence method for calibration of gamma detectors using suitable calibration standards with two cascading gamma rays is analyzed. From the list of recommended gamma ray standards currently under reevaluation by the CRP, 14 radionuclides were selected as the potential source candidates for the coincidence method. The following sources were selected: 24Na, 46Sc, 60Co, 66Ga, 75Se, 88Y, 94Nb, 111In, 123mTe, 133Ba, 134Cs, 152Eu, 154Eu and 207Bi. The 11B(p,γ)12C* reaction was also selected as a source of high energy gamma rays. Experimental data on angular correlation coefficients for the selected sources were collected from the literature and evaluated according to the recommended procedure. Theoretical angular correlation coefficients were calculated and compared to the evaluated data. (author)
Competent employees are the key resource in an organization for achieving success and, therefore, competitiveness on the market. The aim of the recruitment and selection process is to acquire personnel with certain competencies required for a particular position within the company. Bearing in mind that decision-makers often underuse formal decision-making methods, this paper aims to establish an MCDM model for the evaluation and selection of candidates in the process of the recruitment and selection of personnel, based on the SWARA and ARAS methods. Apart from providing an MCDM model, the paper additionally provides a set of evaluation criteria for the position of a sales manager (middle management) in the telecommunication industry, which is also used in the numerical example. On the basis of the numerical example, the proposed MCDM model can be successfully used in selecting candidates in the process of employment.
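A sketch of the ARAS ranking step, assuming criterion weights have already been obtained (e.g. via SWARA); the candidates, scores, and weights below are made up for illustration and all criteria are treated as benefit-type:

```python
# ARAS: normalize, weight, sum, then express each alternative's score
# as a utility degree relative to the ideal (optimal) alternative.
import numpy as np

w = np.array([0.35, 0.25, 0.25, 0.15])        # SWARA-style criterion weights
X = np.array([[7, 8, 6, 9],                   # candidate A
              [9, 6, 7, 7],                   # candidate B
              [6, 9, 8, 6]], dtype=float)     # candidate C

ideal = X.max(axis=0)                          # optimal alternative per criterion
M = np.vstack([ideal, X])
M = M / M.sum(axis=0)                          # column (criterion) normalization
S = (M * w).sum(axis=1)                        # overall weighted scores
K = S[1:] / S[0]                               # utility degree vs the ideal
print("utility degrees:", np.round(K, 3))
```

The candidate with the highest utility degree K would be ranked first; cost-type criteria would be inverted before normalization.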
Kikkenborg Pedersen, Dennis; Dukovska-Popovska, Iskra; Ola Strandhagen, Jan
This paper presents a general method for evaluating and selecting environmental certificates and labels for companies to use on products and services. The method is developed based on a case study using a Grounded Theory approach. The result is a generalized six-step method that features an initial searching strategy and an evaluation model that weighs the prerequisites, rewards and organization of a certificate or label against the strategic needs of a company.
Jasińska, Elżbieta; Preweda, Edward
The paper presents the problem of selecting the factor-extraction method in factor analysis. For a database of 61 real estate properties, the factors were singled out using all of the methods offered in the STATISTICA 6.0 package. Particular attention was paid to the number of differentiated factors and to the efficiency of the respective methods for real estate market analysis.
Lu Si; Jie Yu; Shasha Li; Jun Ma; Lei Luo; Qingbo Wu; Yongqi Ma; Zhengji Liu
The instance selection (IS) technique is used to reduce the data size to improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule...
In the paper selected methods of measuring the thermal resistance of an IGBT (Insulated Gate Bipolar Transistor) are presented and the accuracy of these methods is analysed. The measurement error is analysed and the operating conditions of the considered device at which each measurement method assures the smallest measuring error are pointed out. Theoretical considerations are illustrated with some results of measurements and calculations.
Laskin, Julia [Richland, WA; Wang, Peng [Billerica, MA
Disclosed is a method for selective chemical binding and immobilization of macromolecules on solid supports in conjunction with self-assembled monolayer (SAM) surfaces. Immobilization involves selective binding of peptides and other macromolecules to SAM surfaces using reactive landing (RL) of mass-selected, gas phase ions. SAM surfaces provide a simple and convenient platform for tailoring chemical properties of a variety of substrates. The invention finds applications in biochemistry ranging from characterization of molecular recognition events at the amino acid level and identification of biologically active motifs in proteins, to development of novel biosensors and substrates for stimulated protein and cell adhesion.
Gallardo, A; Carlos, M; Colomer, F J; Edo-Alcón, N
There are several factors which influence the selective collection of municipal waste. To define a selective collection system, the waste generation pattern should first be determined and these factors analyzed in depth. This paper analyzes the influence of economic income level and seasonal variation on the collection and purity of light-packaging waste, in order to determine actions to improve the waste management plan of a town. In the first stage of the work, waste samples from the light-packaging containers were collected in two zones of the town with different economic characteristics, in different seasons, during one year. In the second stage, the samples were characterized to analyze the composition and purity of the waste. They were first separated into four fractions: metals; plastic; beverage cartons; and misplaced materials. The misplaced fraction was in turn separated into cardboard, rubber and leather, inert waste, organic matter, paper, hazardous waste, clothes and shoes, glass and others. The plastic fraction was separated into five types of plastics and the metal fraction into three. In the third stage, the data were analyzed and conclusions extracted. The main result is that the quality of the light-packaging fraction collected in these zones during both seasons was similar. This methodology can be extrapolated to towns with similar characteristics. It will be useful when implementing a system to collect waste selectively and when developing actions to achieve good participation in selective collection.
Liang, Ja-Der; Ping, Xiao-Ou; Tseng, Yi-Ju; Huang, Guan-Tarn; Lai, Feipei; Yang, Pei-Ming
Recurrence of hepatocellular carcinoma (HCC) is an important issue despite effective treatments with tumor eradication. Identification of patients who are at high risk for recurrence may provide more efficacious screening and detection of tumor recurrence. The aim of this study was to develop recurrence predictive models for HCC patients who received radiofrequency ablation (RFA) treatment. From January 2007 to December 2009, 83 newly diagnosed HCC patients receiving RFA as their first treatment were enrolled. Five feature selection methods, including genetic algorithm (GA), simulated annealing (SA) algorithm, random forests (RF) and hybrid methods (GA+RF and SA+RF), were utilized to select an important subset from a total of 16 clinical features. These feature selection methods were combined with support vector machine (SVM) to develop predictive models with better performance. Five-fold cross-validation was used to train and test the SVM models. The developed SVM-based predictive models with hybrid feature selection methods and 5-fold cross-validation had average sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the ROC curve of 67%, 86%, 82%, 69%, 90%, and 0.69, respectively. The SVM-derived predictive model can flag patients at high risk of recurrence, who should be closely followed up after complete RFA treatment. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
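A toy wrapper-style sketch in the spirit of the GA/SA + SVM hybrids above: candidate feature subsets are scored by cross-validated SVM accuracy and the best subset is kept. A real GA would evolve subsets via crossover and mutation rather than the pure random sampling used here, and the synthetic data merely mimic the 16-feature, 83-patient setting:

```python
# Wrapper feature selection: score random subsets of 16 "clinical features"
# by 5-fold cross-validated SVM accuracy; keep the best-scoring subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=83, n_features=16, n_informative=5,
                           random_state=1)
rng = np.random.default_rng(1)

best_score, best_mask = -1.0, None
for _ in range(50):                       # 50 candidate subsets
    mask = rng.random(16) < 0.5           # random inclusion of each feature
    if not mask.any():
        continue
    score = cross_val_score(SVC(), X[:, mask], y, cv=5).mean()
    if score > best_score:
        best_score, best_mask = score, mask

print(f"best subset uses {best_mask.sum()} features, accuracy {best_score:.2f}")
```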
Lynn Henry S
Background: Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods: Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results: Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions: The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.
For economic reasons, the substitution of maize by feed cereals such as wheat (Sulamit) and triticale (Kitaro) was investigated in the concentrate part of dairy cow feeding rations. The design of the replacement in the feeding rations was based on the results of previous research (Pozdíšek and Vaculová, 2008). The aim of this paper was to evaluate the possible effects of this replacement on milk composition and properties. Cereal variants markedly different from maize (the reference) were selected for the experiment. Dairy cows were fed a total mixed ration based on maize and clover silage and hay. The otherwise identical daily feeding rations differed among cow groups only in the concentrate portions: maize 1.5 kg (K, control group), wheat 2.0 kg (P1) and triticale 2.0 kg (P2) (experimental groups). Group feeding rations 1 (K), 2 (P1) and 3 (P2) had: NEL/kg dry matter (DM) 6.524, 6.512 and 6.491; NL % in DM 17.9, 18.2 and 17.9; fibre % in DM 15.96, 15.74 and 15.72; PDIN/PDIE 1.189, 1.189 and 1.191. The experiment took six weeks and included 8, 9 and 9 cows (n = 26) of the Czech Fleckvieh breed. The feed groups were well balanced in terms of milk yield, days in milk and lactation number. A tie stable and pipeline milking equipment were used in the experiment. Animals were milked twice a day and sampled at morning milking at intervals of approximately seven days. The cows were relatively healthy in terms of the occurrence of milk secretion disorders. Within groups, the individual milk samples (182 in total) were aggregated into bulk samples (n = 21 = 3 groups × 7 sampling periods), which were analysed for 45 milk indicators, 18 of which are evaluated in this paper. The differences in milk yield at morning milking were significantly in favour of the K group (15.32 kg > 14.07 kg (wheat) or 13.86 kg (triticale)), while fat was lower (3.27 < 3.47 or 3.44 %; P < 0.05). Lactose was not
Mueller, David S.
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
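The power velocity distribution law that extrap applies can be sketched as follows: fit v = a·z^b to the measured portion of a normalized velocity profile by log-log least squares, then integrate the fit analytically over the unmeasured top and bottom zones. The profile data below are synthetic, generated from the classical 1/6-power law:

```python
# Power-law extrapolation of unmeasured discharge at the top and bottom
# of a velocity profile (unit width, normalized depth).
import numpy as np

z = np.linspace(0.1, 0.9, 9)           # measured heights above the bed
v = 1.0 * z**(1/6)                     # synthetic profile, exponent 1/6

# Least-squares fit of log v = log a + b log z
b, log_a = np.polyfit(np.log(z), np.log(v), 1)
a = np.exp(log_a)

def layer_q(z_lo, z_hi):
    """Unit-width discharge: integral of a*z**b over [z_lo, z_hi]."""
    return a * (z_hi**(b + 1) - z_lo**(b + 1)) / (b + 1)

q_bottom = layer_q(0.0, 0.1)           # below the first measured bin
q_top = layer_q(0.9, 1.0)              # above the last measured bin
print(f"exponent {b:.3f}, bottom q {q_bottom:.4f}, top q {q_top:.4f}")
```

In extrap the exponent is derived from the normalized profile of the whole cross section rather than a single vertical, but the extrapolation step is the same integration shown here.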
Shamsoddini, A.; Aboodi, M. R.; Karami, J.
Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability, and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model to estimate carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide emissions was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
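A sketch of the described pipeline with scikit-learn, using synthetic regression data in place of the meteorological and pollutant measurements; the feature count, number of selected predictors, and network size are illustrative choices, not the study's:

```python
# Random Forest importances select the predictors; a multilayer perceptron
# then regresses the pollutant concentration on the selected features.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=300, n_features=20, n_informative=6,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: rank features by Random Forest importance, keep the top 6.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[-6:]

# Step 2: fit the neural network on the selected features only.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(X_tr[:, top], y_tr)
print(f"R^2 on held-out data: {mlp.score(X_te[:, top], y_te):.2f}")
```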
Shakouri Ganjavi, H.; Nikravesh, K.
Modeling is considered the first step in scientific investigations. Several alternative models may be candidates to express a phenomenon. Scientists use various criteria to select one model among the competing models. Based on the solution of a Fuzzy Decision-Making problem, this paper proposes a new method of model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper Possibility Distribution Function for each criterion. Finally, minimization of a utility function composed of the Possibility Distribution Functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran.
Meyers, Charles E.; Davidson, George S.; Johnson, David K.; Hendrickson, Bruce A.; Wylie, Brian N.
A method of data mining represents related items in a multidimensional space. Distance between items in the multidimensional space corresponds to the extent of relationship between the items. The user can select portions of the space to perceive. The user also can interact with and control the communication of the space, focusing attention on aspects of the space of most interest. The multidimensional spatial representation allows more ready comprehension of the structure of the relationships among the items.
Ulonska, E.; Gaul, H.; Baumer, M.; Gesellschaft fuer Strahlen- und Umweltforschung m.b.H., Gruenbach
This mutation breeding programme is investigating the qualification of micro-mutations for the selection of improved protein quality and quantity. Normally, improvement of protein content in micro-mutations is rather small. Therefore, it is important to develop methods and conditions of selection being (a) capable of measuring these small deviations in protein content and quality, and (b) simple to use. In two experiments carried out in 1971 and 1972 nitrogen fertilization was found to be the most important factor in the improvement of selection conditions. There is a highly significant negative correlation between crude protein content and the standard deviation; i.e. the higher the content of crude protein, the lower the variation coefficient. This in turn leads to an increase of genetic variation necessary for better selection progress. Nitrogen fertilization, especially during ear emergence, covers environmental influences - e.g., planting space, sowing rate, growing in different plots (6, 3, 2, 1 rows or in half-ear hills) - to a great extent. Thus, by applying high doses of nitrogen dressings comparable results can be achieved. In an overall selection experiment (testing the entire crossing and mutation material available at Weihenstephan in a stepwise selection from 1971 to 1973) and two selection experiments conducted in 1971 to 1973 with micro-mutants - variety Nota, 4 times X-rayed and the naked barley strain 1606 treated once with EMS - significant selection results were found. (author)
It is crucial to differentiate patients with temporal lobe epilepsy (TLE) from the healthy population and to determine abnormal brain regions in TLE. Cortical features and their changes can reveal the unique anatomical patterns of brain regions in structural magnetic resonance (MR) images. In this study, structural MR images from 41 patients with left TLE, 34 patients with right TLE, and 58 normal controls (NC) were acquired, and four kinds of cortical measures, namely cortical thickness, cortical surface area, gray matter volume (GMV), and mean curvature, were explored for discriminative analysis. Three feature selection methods, namely independent-sample t-test filtering, the sparse-constrained dimensionality reduction model (SCDRM), and support vector machine-recursive feature elimination (SVM-RFE), were investigated to extract dominant features among the compared groups for classification using the support vector machine (SVM) classifier. The results showed that SVM-RFE achieved the highest performance (most classifications with more than 84% accuracy), followed by the SCDRM and the t-test. In particular, surface area and GMV exhibited prominent discriminative ability, and the performance of the SVM improved significantly when the four cortical measures were combined. Additionally, the dominant regions with higher classification weights were mainly located in the temporal and frontal lobes, including the entorhinal cortex, rostral middle frontal cortex, parahippocampal cortex, superior frontal cortex, insula, and cuneus. This study concluded that the cortical features provided effective information for the recognition of abnormal anatomical patterns and that the proposed methods have the potential to improve the clinical diagnosis of TLE.
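The SVM-RFE step described above can be sketched with scikit-learn's generic RFE wrapper. The data below are synthetic stand-ins: the sample count and feature pool are assumptions for illustration, not the study's cortical measures.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for pooled cortical features: 133 subjects x 40 features.
X, y = make_classification(n_samples=133, n_features=40, n_informative=8,
                           random_state=0)

# SVM-RFE: recursively drop the feature with the smallest |weight| of a
# linear SVM until the requested number of features remains.
svm = SVC(kernel="linear", C=1.0)
rfe = RFE(estimator=svm, n_features_to_select=10, step=1).fit(X, y)
selected = np.where(rfe.support_)[0]

# Cross-validated accuracy of the SVM on the selected feature subset.
acc = cross_val_score(svm, X[:, selected], y, cv=5).mean()
print(len(selected), 0.0 <= acc <= 1.0)
```

The same pattern applies to any estimator exposing `coef_` or `feature_importances_`.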
MSc. Halit Karaxha
Selecting the adequate method of promotion is hugely important for increasing a business's performance. Selecting the budget method depends on a number of factors. The formulation of the budget is known as the most critical phase, requiring special analysis from marketing managers. The expenses for promotion are usually high, and every investment made in the field of promotion directly influences the business's situation. Thus, the selection and adequate formulation of budget methods for promotion influences the growth of profit. The amount allocated for promotion depends on a number of factors, such as the size of the firm, the sector in which it operates, competition, etc. After planning the budget, the budget must be allocated in order to select the promotional form which the firms consider successful in promoting their products and services and which will help the company connect with its clients. In this paper, I elaborate the role and importance of the preparation and selection of budget methods for promotion, in the theoretical aspect and the practical one as well.
Eren Halil ÖZBERK
In contrast with previous studies, this study employed various test designs (simple and complex) which allow the evaluation of overall ability score estimations across multiple real test conditions. Four factors were manipulated, namely the test design, the number of items per dimension, the correlation between dimensions, and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated by using the M3PL compensatory multidimensional IRT model with specified correlations. MCAT composite ability score accuracy was evaluated using absolute bias (ABSBIAS), correlation, and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension, and the correlation between dimensions had a significant effect on item selection methods for the overall score estimations. For the simple-structure test design, it was found that V1 item selection has the lowest absolute bias estimations for both long and short tests while estimating overall scores. As the model gets more complex, the KL item selection method performed better than the other two item selection methods.
Snel, H. [Netherlands Energy Research Foundation ECN, Renewable Energy, Wind Energy (Netherlands)
Recently the Blade Element Momentum (BEM) method has been made more versatile. Inclusion of rotational effects on time-averaged profile coefficients has improved its performance for performance calculations in stalled flow. Time dependence as a result of turbulent inflow, pitching actions and yawed operation is now treated more correctly (although more improvement is needed) than before. It is of interest to note that adaptations in the modelling of unsteady or periodic induction stem from qualitative and quantitative insights obtained from free vortex models. Free vortex methods and, further into the future, Navier-Stokes (NS) calculations, together with wind tunnel and field experiments, can be very useful in enhancing the potential of BEM for aero-elastic response calculations. It must be kept in mind, however, that extreme caution must be used with free vortex methods, as will be discussed in the following chapters. A discussion of the shortcomings and the strengths of BEM and of vortex wake models is given. Some ideas are presented on how BEM might be improved without too much loss of efficiency. (EG)
Kekre, H.B.; Madan, V.K.; Bairi, B.R.
A novel method is proposed for the selection of a smoothing interval for the widely used Savitzky-Golay smoothing filter. Complementary bandwidths for the nuclear spectral data and the smoothing filter are defined. The criterion for the selection of the smoothing interval is based on matching the bandwidth of the spectral data to that of the filter. Using the above method, five real observed spectral peaks of different full widths at half maximum, viz. 23.5, 19.5, 17, 8.5 and 6.5 channels, were smoothed and the results are presented. (orig.)
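The filter itself can be illustrated with SciPy on a simulated peak; note that this sketch only applies a Savitzky-Golay filter with a window near the peak's FWHM for illustration, and does not reproduce the paper's bandwidth-matching selection criterion.

```python
import numpy as np
from scipy.signal import savgol_filter

# Simulated spectral peak: Gaussian of FWHM ~ 17 channels plus Poisson noise.
rng = np.random.default_rng(0)
x = np.arange(200)
sigma = 17.0 / 2.3548                      # FWHM = 2.3548 * sigma
peak = 1000.0 * np.exp(-0.5 * ((x - 100) / sigma) ** 2)
noisy = rng.poisson(peak + 10).astype(float)

# Savitzky-Golay smoothing: least-squares polynomial fit in a moving window.
# The window length (smoothing interval) is the quantity the paper selects.
smoothed = savgol_filter(noisy, window_length=17, polyorder=2)

# Smoothing should reduce the RMS deviation from the noise-free peak.
rms_smoothed = np.sqrt(np.mean((smoothed - peak) ** 2))
rms_noisy = np.sqrt(np.mean((noisy - peak) ** 2))
print(rms_smoothed < rms_noisy)
```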
The selection of neutral pions and etas with high purity, while also maintaining high efficiency, can be important for the formation of statistically significant mass spectra in the reconstruction of short-lived particles such as the omega meson (ω → π⁺ + π⁻ + π⁰). In this study a ranking method has been optimized for data from the ALEPH experiment, CERN. The results show that the ranking method, when applied to high-multiplicity events, yields significant improvements in the purity of selected pion candidates and facilitates the relaxation of standard cuts, thereby avoiding some systematic uncertainties.
A novel method for the production of nuclear-polarized negative hydrogen ions, by selective laser neutralization of negative hydrogen ions in a magnetic field, is described. This selectivity is possible since the final state of the neutralized atom, and hence the neutralization energy, depends on its nuclear polarization. The main advantages of this scheme are the availability of multi-ampere negative ion sources and the possibility of neutralizing negative ions with very high efficiency. An assessment of the required laser power indicates that this method is in principle feasible with today's technology.
Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David
To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and the AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05), and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
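The DVH decomposition and model-fitting steps can be sketched roughly as follows. Everything here is a placeholder: the dose-volume histograms and toxicity labels are random, and an ordinary binary logistic regression stands in for the paper's ordinal model, genetic-algorithm search, and bootstrap loop.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic bladder DVHs: 345 patients x 50 dose bins, monotonically
# decreasing cumulative volumes (random placeholders, not real data).
dvh = np.sort(rng.random((345, 50)), axis=1)[:, ::-1]
toxicity = rng.integers(0, 2, size=345)   # placeholder grade >=2 flag

# Decompose the DVHs into principal components explaining >= 95% variance.
pca = PCA(n_components=0.95).fit(dvh)
pcs = pca.transform(dvh)

# 70/30 split, then a (binary, not ordinal) logistic NTCP surrogate.
Xtr, Xte, ytr, yte = train_test_split(pcs, toxicity, train_size=0.7,
                                      random_state=1)
model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
risk = model.predict_proba(Xte)[:, 1]     # predicted cystitis probability
print(pca.n_components_, risk.min() >= 0.0 and risk.max() <= 1.0)
```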
Çalışkan, Halil; Kurşuncu, Bilal; Kurbanoğlu, Cahit; Güven, Şevki Yılmaz
Highlights: ► The material selection problem for tool holders used in hard milling was solved. ► EXPROM2, TOPSIS and VIKOR methods were used for ranking the alternative materials. ► The weighting of criteria was performed by a compromised weighting method. ► The best material for the tool holder was selected as Fe–5Cr–Mo–V aircraft steel. -- Abstract: Nowadays the machining of materials in their hardened state, also called hard machining, is a challenge in the production of tools and molds. It has advantages such as lower process time and lower manufacturing cost when compared to conventional machining. In the machining of hard workpiece materials, however, very high stresses act on the tool holder through the cutting tool. These stresses require the tool holder to have some specific properties. Especially in hard milling, the tool holder should have high stiffness and should be able to dissipate the energy generated during interrupted cutting. The material cost of the tool holder is also important, since lower costs provide a competitive advantage for manufacturers. The material selection for the tool holder should be conducted considering the aforementioned requirements. To tackle the difficulty of selecting a material with specific properties from a large number of alternatives, multi-criteria decision-making (MCDM) methods have been used. In this paper a decision model including the extended PROMETHEE II (EXPROM2) (preference ranking organization method for enrichment evaluation), TOPSIS (technique for order performance by similarity to ideal solution) and VIKOR (VIšekriterijumsko KOmpromisno Rangiranje) methods was used for the selection of the best material for the tool holder used in hard milling. The criteria weighting was performed by a compromised weighting method composed of the AHP (analytic hierarchy process) and Entropy methods. The candidate materials were ranked using these methods and the results obtained by each method were compared. It was confirmed
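Of the three MCDM methods, TOPSIS is the most compact to sketch. The candidate tool-holder materials, criteria and weights below are hypothetical illustrations, not the values used in the study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (higher = better)."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    v = m * weights                                    # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)          # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)           # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                     # closeness coefficient

# Hypothetical candidates scored on stiffness (GPa), damping capacity,
# and material cost; cost is a non-benefit (lower-is-better) criterion.
scores = np.array([[210.0, 0.7, 30.0],
                   [195.0, 0.9, 22.0],
                   [205.0, 0.6, 45.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])

closeness = topsis(scores, weights, benefit)
print(np.argsort(closeness)[::-1])   # alternatives ranked best-first
```

EXPROM2 and VIKOR follow the same pattern of normalizing, weighting and aggregating, but with different aggregation rules.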
Abdel Samee Nagwan M
Background: Discovering new biomarkers plays a great role in improving the early diagnosis of hepatocellular carcinoma (HCC). The experimental determination of biomarkers needs a lot of time and money, which motivates this work to use in-silico prediction of biomarkers to reduce the number of experiments required for detecting new ones. This is achieved by extracting the most representative genes in microarrays of HCC. Results: In this work, we provide a method for extracting the differentially expressed, up-regulated genes that can be considered candidate biomarkers in high-throughput microarrays of HCC. We examine the power of several gene selection methods (such as Pearson's correlation coefficient, cosine coefficient, Euclidean distance, mutual information and entropy with different estimators) in selecting informative genes. A biological interpretation of the highly ranked genes is performed using the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways and the ENTREZ and DAVID (Database for Annotation, Visualization, and Integrated Discovery) databases. The top ten genes selected using Pearson's correlation coefficient and the cosine coefficient contained six genes that have been implicated in the genesis of cancer (often multiple cancers) in previous studies. Fewer such genes were obtained by the other methods (4 genes using mutual information, 3 genes using Euclidean distance and only one gene using entropy). A better result was obtained by utilizing a hybrid approach based on intersecting the highly ranked genes in the output of all investigated methods. This hybrid combination yielded seven such genes (2 genes for HCC and 5 genes in different types of cancer) in the top ten genes of the list of intersected genes. Conclusions: To strengthen the effectiveness of the univariate selection methods, we propose a hybrid approach that intersects several of these methods in a cascaded manner. This approach surpasses all of the univariate selection methods when
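The hybrid idea of intersecting the top-ranked genes from several univariate selectors can be sketched on synthetic expression data; the gene counts, effect sizes and the particular pair of rankers below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, n_samples = 500, 40
labels = np.array([0] * 20 + [1] * 20)          # normal vs. HCC samples
expr = rng.normal(size=(n_genes, n_samples))
expr[:5, labels == 1] += 3.0                    # spike 5 up-regulated genes

def rank_pearson(expr, labels, k):
    # Rank genes by |Pearson correlation| between expression and class label.
    r = np.array([np.corrcoef(g, labels)[0, 1] for g in expr])
    return set(np.argsort(-np.abs(r))[:k])

def rank_euclidean(expr, labels, k):
    # Rank genes by the difference in per-class expression-vector norms.
    d = np.linalg.norm(expr[:, labels == 0], axis=1) \
        - np.linalg.norm(expr[:, labels == 1], axis=1)
    return set(np.argsort(-np.abs(d))[:k])

# Hybrid: intersect the top-k lists of several univariate rankers.
k = 10
candidates = rank_pearson(expr, labels, k) & rank_euclidean(expr, labels, k)
print(sorted(candidates))
```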
Chien Hung WEI
Taiwan is involved in intelligent transportation systems planning and is now selecting its priority focus areas for investment and development. The high social and economic impact associated with which intelligent transportation systems technologies are chosen explains the efforts of various electronics and transportation corporations to develop intelligent transportation systems technology in order to expand their business opportunities. However, no detailed research has been conducted with regard to selecting technology for advanced public transportation systems in Taiwan. Thus, the present paper demonstrates a grey statistics method integrated with a scenario method for solving the problem of selecting advanced public transportation systems technology for Taiwan. A comprehensive questionnaire survey was conducted to demonstrate the effectiveness of the grey statistics method. The proposed approach indicated that contactless smart card technology is the appropriate technology for Taiwan to develop in the near future. The significance of our research results implies that the grey statistics method is an effective method for selecting advanced public transportation systems technologies. We feel our information will be beneficial to the private sector for developing an appropriate intelligent transportation systems technology strategy.
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
In this work, an automatic variable selection method for the quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of the models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.
At present, green building and housing industrialization are the two mainstream directions in the real estate industry. The production of green building parts, which combines the two concepts of green building and housing industrialization, is to be vigorously developed. The key to quality assurance in an assembly project is choosing reliable and suitable suppliers of green building parts. This paper analyzes the inherent requirements of green building, combined with the characteristics of housing industrialization, and puts forward an evaluation index system for supplier selection for green building parts, which includes a product index, an enterprise index, a green development index, and a cooperation ability index. To reduce the influence of subjective factors, an improved method which merges the Kent index method and catastrophe theory is applied to green building parts supplier selection and evaluation. This paper takes the selection of unit bathroom suppliers as an example, uses the improved model to calculate and analyze the data of each supplier, and finally selects the optimal supplier. With the combination of the Kent index and catastrophe theory, the result shows that the method can effectively reduce the subjectivity of the evaluation and provide a basis for the selection of green building parts suppliers.
Heleen L. P. Mees
Policy instruments can help put climate adaptation plans into action. Here, we propose a method for the systematic assessment and selection of policy instruments for stimulating adaptation action. The multi-disciplinary set of six assessment criteria is derived from economics, policy, and legal studies. These criteria are specified for the purpose of climate adaptation by taking into account four challenges to the governance of climate adaptation: uncertainty, spatial diversity, controversy, and social complexity. The six criteria and four challenges are integrated into a step-wise method that enables the selection of instruments starting from a generic assessment and ending with a specific assessment of policy instrument mixes for the stimulation of a specific adaptation measure. We then apply the method to three examples of adaptation measures. The method's merits lie in enabling deliberate choices through a holistic and comprehensive set of adaptation specific criteria, as well as deliberative choices by offering a stepwise method that structures an informed dialog on instrument selection. Although the method was created and applied by scientific experts, policy-makers can also use the method.
Wickman, Jonas; Diehl, Sebastian; Blasius, Bernd; Klausmeier, Christopher A; Ryabov, Alexey B; Brännström, Åke
Spatial structure can decisively influence the way evolutionary processes unfold. To date, several methods have been used to study evolution in spatial systems, including population genetics, quantitative genetics, moment-closure approximations, and individual-based models. Here we extend the study of spatial evolutionary dynamics to eco-evolutionary models based on reaction-diffusion equations and adaptive dynamics. Specifically, we derive expressions for the strength of directional and stabilizing/disruptive selection that apply both in continuous space and to metacommunities with symmetrical dispersal between patches. For directional selection on a quantitative trait, this yields a way to integrate local directional selection across space and determine whether the trait value will increase or decrease. The robustness of this prediction is validated against quantitative genetics. For stabilizing/disruptive selection, we show that spatial heterogeneity always contributes to disruptive selection and hence always promotes evolutionary branching. The expression for directional selection is numerically very efficient and hence lends itself to simulation studies of evolutionary community assembly. We illustrate the application and utility of the expressions for this purpose with two examples of the evolution of resource utilization. Finally, we outline the domain of applicability of reaction-diffusion equations as a modeling framework and discuss their limitations.
The complex nature of machining processes requires the use of different methods and techniques for process optimization. Over the past few years a number of different optimization methods have been proposed for solving continuous machining optimization problems. In the manufacturing environment, engineers are also facing a number of discrete machining optimization problems. In order to help decision makers in solving this type of optimization problem, a number of multi-criteria decision-making (MCDM) methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, i.e. the performance selection index (PSI) method, for solving machining MCDM problems. The main motivation for using the PSI method is that it is not necessary to determine criteria weights, as in other MCDM methods. The applicability and effectiveness of the PSI method have been demonstrated by solving two case studies dealing with the machinability of materials and with the selection of the most suitable cutting fluid for a given machining application. The obtained rankings correlate well with those derived by past researchers using other MCDM methods, which validates the usefulness of this method for solving machining MCDM problems.
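A minimal sketch of the PSI computation, following the commonly published formulation (normalization, preference variation, data-derived criterion weights). The alternatives and scores below are hypothetical, and the exact formulation should be checked against the paper.

```python
import numpy as np

def psi_rank(matrix, benefit):
    """Performance Selection Index: weights come from the data, not the DM."""
    # Normalize: benefit criteria against column max, cost against column min.
    norm = np.where(benefit, matrix / matrix.max(axis=0),
                    matrix.min(axis=0) / matrix)
    mean = norm.mean(axis=0)
    pv = ((norm - mean) ** 2).sum(axis=0)     # preference variation value
    dev = np.abs(1.0 - pv)                    # deviation in preference value
    psi_w = dev / dev.sum()                   # criterion weights from the data
    return (norm * psi_w).sum(axis=1)         # overall preference index

# Hypothetical cutting-fluid alternatives scored on surface finish (benefit),
# tool wear (cost) and price (cost).
data = np.array([[0.90, 0.20, 5.0],
                 [0.80, 0.15, 4.0],
                 [0.85, 0.30, 3.5]])
benefit = np.array([True, False, False])

index = psi_rank(data, benefit)
print(np.argmax(index))   # index of the preferred alternative
```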
Most economic, industrial, financial or political decision problems are multi-criteria. In these multi-criteria problems, the optimal selection of alternatives is a hard and complex process. Recently, various methods have been developed to solve these problems. Promethee is one of the most efficient and easiest methods, and solves problems that consist of quantitative criteria. However, in daily life there are criteria which are expressed linguistically and cannot be modeled numerically. Hence, the Promethee method is incomplete for linguistic criteria, which are imprecise. To remedy this deficiency, a fuzzy set approximation can be used. The Promethee method, extended with fuzzy inputs, is applied to car selection for seven different cars in the same class, using the criteria price, fuel, performance and security. The obtained results are appropriate and consistent.
Jade, A M; Jayaraman, V K; Kulkarni, B D
A new method for model selection in the prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through local Hölder estimates and their probability density spectrum. Predictions of two simulated and one real time series have been made using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. The results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of unseen test data. (letter to the editor)
Baty, J D; Playfer, J; Evans, D A; Lamb, J
Plasma samples from volunteers who had received an oral dose of acetanilide have been analysed by gas chromatography mass spectrometry and ultraviolet absorption techniques. The gas chromatography mass spectrometry method involved extraction of the plasma and analysis of the acetanilide using selected ion monitoring with a deuterated internal standard. In the ultraviolet method the plasma was hydrolysed with acid to convert the acetanilide to aniline, and this compound was diazotized and coupled with N-1-naphthylethylene-diamine. The absorbance of the resulting complex was read at 550 nm. Acetanilide levels in plasma determined by the selected ion monitoring method were significantly lower than those measured by spectrophotometry. Pharmacokinetic data calculated from the results obtained using these two assays are very different and illustrate the need for an accurate and specific method of analysis. The major metabolites of acetanilide are shown not to interfere with these assays and the results suggest the possible presence of a new metabolite of acetanilide.
Li, Zhixing; Zhang, Haihua; Li, Suhe
To explore the accuracy of the simple selection method for Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and the integration of modern anatomy, the modern simple selection method for Fengshi (GB 31) is made definite, and it is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.
Highlights: • The boron-based coating selection problem for cutting tools was solved. • EXPROM2, TOPSIS and VIKOR methods were used for ranking the alternative materials. • The best coatings for cutting tools were selected as TiBN and TiSiBN. • The ranking results are in good agreement with cutting test results in the literature. - Abstract: The mechanical and tribological properties of hard coatings can be enhanced using boron as an alloying element. Therefore, multicomponent nanostructured boron-based hard coatings are deposited on cutting tools by different methods at different parameters. Different mechanical and tribological properties are obtained after deposition, and it is a difficult task to select the best coating material. In this paper, therefore, a systematic evaluation model is proposed to tackle the difficulty of selecting a material with specific properties among a set of available alternatives. The alternatives consist of multicomponent nanostructured TiBN, TiCrBN, TiSiBN and TiAlSiBN coatings deposited by magnetron sputtering and ion-implantation-assisted magnetron sputtering at different parameters. The alternative coating materials were ranked using three multi-criteria decision-making (MCDM) methods, i.e. EXPROM2 (extended preference ranking organization method for enrichment evaluation), TOPSIS (technique for order performance by similarity to ideal solution) and VIKOR (VIšekriterijumsko KOmpromisno Rangiranje), in order to determine the best coating material for cutting tools. Hardness (H), Young's modulus (E), elastic recovery, friction coefficient, critical load, and the H/E and H³/E² ratios were considered as material selection criteria. In order to determine the importance weights of the evaluation criteria, a compromised weighting method, composed of the analytic hierarchy process and Entropy methods, was used. The ranking results showed that the TiBN and TiSiBN coatings deposited at the given parameters are the best coatings for cutting tools.
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize the results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
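The backward elimination loop investigated here can be sketched with scikit-learn on synthetic data (the sample and feature counts are arbitrary stand-ins, not the StreamCat data). Note the paper's caveat: an OOB score computed inside the selection loop is optimistically biased, so accuracy should be confirmed with external validation folds.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: good/poor stream condition vs. many landscape metrics.
X, y = make_classification(n_samples=400, n_features=40, n_informative=6,
                           random_state=0)

def backward_eliminate(X, y, keep=10):
    """Iteratively drop the least important feature, refitting each round."""
    cols = list(range(X.shape[1]))
    while len(cols) > keep:
        rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                                    random_state=0).fit(X[:, cols], y)
        worst = int(np.argmin(rf.feature_importances_))
        del cols[worst]
    # Final fit on the reduced variable set; OOB score is biased upward
    # because the same data drove the elimination decisions.
    rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                                random_state=0).fit(X[:, cols], y)
    return cols, rf.oob_score_

cols, oob = backward_eliminate(X, y, keep=10)
print(len(cols), 0.0 <= oob <= 1.0)
```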
Xie Hongyun; Chen Liang; Shen Pei; Sun Botao; Wang Renqing; Xiao Ying; You Yunxia; Zhang Wanrong
A novel cascaded DFB laser, which consists of two serial gratings to provide selectable wavelengths, is presented and analyzed by the transfer matrix method. In this method, the effective facet reflectivity is derived from the transfer matrix built for each serial section and is then used to simulate the performance of the novel cascaded DFB laser through self-consistently solving the gain equation, the coupled wave equations and the current continuity equations. The simulations prove the feasibility of this kind of wavelength-selectable laser, and a correspondingly designed device with two selectable wavelengths of 1.51 μm and 1.53 μm was realized by experiments on an InP-based multiple quantum well structure. (semiconductor devices)
Partila, Pavol; Voznak, Miroslav; Tovarek, Jaromir
The impact of the classification method and of feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the complexity of system computation. This step is necessary especially for systems that will be deployed in real-time applications. The reason for the development and improvement of speech emotion recognition systems is their wide usability in today's automatic voice-controlled systems. The Berlin database of emotional recordings was used in this experiment. The classification accuracy of artificial neural networks, k-nearest neighbours, and Gaussian mixture models is measured considering the selection of prosodic, spectral, and voice quality features. The purpose was to find an optimal combination of methods and group of features for stress detection in human speech. The research contribution lies in the design of the speech emotion recognition system with respect to its accuracy and efficiency.
Support vector regression is widely used in the fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose a mixed kernel function as the kernel function of the support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function's weighting coefficients, the kernel parameters, and the regression parameters. Compared with single kernel function and unscented Kalman filter (UKF) support vector regression algorithms, and with genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
Aleksandrov, G N; Olorokov, V P
A method is presented, based on both technical and economic considerations, for selecting cross-arm lengths for intermediate poles of power transmission lines according to the effects of internal overvoltages, employing methods from probability theory and mathematical statistics. The problem of optimum pole size is considered in terms of the effect of internal overvoltages for a prescribed maximum level of 2.1 p.u., currently used in the USSR for the design of 750 kV lines.
Pires, M.A.F.; Bellintani, S.A.
A simple, fast and sensitive method is outlined for determining fluoride in the urine of workers who handle fluoride compounds. The determination is based on the measurement of fluoride by ion-selective electrode. Cationic interferents such as Ca²⁺, Mg²⁺, Fe³⁺ and Al³⁺ are complexed by EDTA and citric acid. Common anions present in urine, such as Cl⁻, PO₄³⁻ and SO₄²⁻, do not interfere with the method. (Author) [pt
Tapas Kumar Biswas
The mobility sector, including all kinds of transportation systems, is facing global challenges with respect to green environmental issues. There has been a paradigm shift in the design and manufacturing of automotive vehicles, keeping in mind the scarcity of fossil fuels and the environmental impact of burning them. The addition of hybrid and electric vehicles to the passenger car segment has gained significant momentum in addressing these global challenges. This research investigates the performance of a group of hybrid vehicles from the customers' perspective. Among the different brands available in the hybrid vehicle market, smart customers give priority to vehicle cost, mileage, tailpipe emissions, comfort, and a large tank volume for long drives. Considering these attributes, a selection strategy for hybrid vehicles has been developed using the entropy-based multi-attributive border approximation area comparison (MABAC) method. This research highlights the best hybrid vehicle, which reduces air pollution in cities with other significant environmental benefits, reduces dependence on foreign energy imports, and minimizes the annual fuel cost.
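The entropy-weighted MABAC ranking described in this record can be sketched for crisp numbers as follows. This is a minimal illustration, not the study's implementation: the three hypothetical cars, the criterion values, and the benefit/cost directions below are invented for the example.

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from the Shannon entropy of each column."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    div = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        div.append(1.0 - e)          # degree of divergence of criterion j
    s = sum(div)
    return [d / s for d in div]

def mabac(matrix, weights, benefit):
    """Score alternatives with the MABAC border-approximation-area method."""
    m, n = len(matrix), len(matrix[0])
    # 1. min-max normalisation; direction depends on criterion type
    norm = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col = [row[j] for row in matrix]
        lo, hi = min(col), max(col)
        for i in range(m):
            r = (matrix[i][j] - lo) / (hi - lo)
            norm[i][j] = r if benefit[j] else 1.0 - r
    # 2. weighted matrix v = w * (r + 1)
    v = [[weights[j] * (norm[i][j] + 1.0) for j in range(n)] for i in range(m)]
    # 3. border approximation area: geometric mean of each column
    g = [math.prod(v[i][j] for i in range(m)) ** (1.0 / m) for j in range(n)]
    # 4. score = total distance from the border approximation area
    return [sum(v[i][j] - g[j] for j in range(n)) for i in range(m)]

# Hypothetical cars: (price in $, mileage in mpg, tailpipe emission in g/km)
cars = [[20000, 55, 80], [25000, 45, 100], [30000, 40, 120]]
benefit = [False, True, False]     # only mileage is a benefit criterion
w = entropy_weights(cars)
scores = mabac(cars, w, benefit)
best = scores.index(max(scores))   # car 0 dominates on every criterion
```

`math.prod` requires Python 3.8+; for older interpreters the geometric mean can be accumulated with a loop.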
Dale R. Weigel; George R. Parker
Tree regeneration response following the use of the group selection method was studied within 36 group openings on the Naval Surface Warfare Center, Crane Division in south central Indiana. Two different aspects and three time periods since cutting were examined. The objectives were to determine whether aspect, age, species group, location within the opening, or their...
Jackson, D.D.; Hollen, R.M.; Roensch, F.R.; Rein, J.E.
A highly selective, controlled-potential coulometric method has been developed for the determination of plutonium. An automated instrument, consisting of commercial electronic components under control of a programmable calculator, is being constructed. Half-cell potentials and interfering anions are listed
Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald
The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Selection of training methods and programs. 617.23 Section 617.23 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR... for which training is undertaken shall not preclude the development of an individual retraining...
Bobrovnikov, S. M.; Gorlov, E. V.; Zharkov, V. I.
A technique for increasing the selectivity of the method of detecting high-energy materials (HEMs) based on laser fragmentation of HEM molecules with subsequent laser excitation of fluorescence of the characteristic NO fragments from the first vibrational level of the ground state is suggested.
Conclusions: The negative selection method to identify AdEasy adenovirus recombinants by colony PCR can identify the recombined colony within a short time-period, and maximally avoid damage to the recombinant plasmid by limiting recombination time, resulting in improved adenovirus packaging.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, ordered by date, as the research dataset. The proposed time-series forecasting model has three main steps. First, this study applies five imputation methods to estimate the missing values. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
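The first two stages of such a pipeline, imputing missing values and pruning weak variables, can be sketched as below. This is a stand-in sketch only: mean imputation replaces the study's five imputation methods, an absolute-correlation score replaces the factor-analysis step, and the toy water-level data are invented.

```python
def mean_impute(series):
    """Replace None entries with the mean of the observed values."""
    observed = [x for x in series if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in series]

def relevance(x, y):
    """Absolute Pearson correlation of a candidate variable with the target."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (sx * sy))

def select_variables(variables, target, threshold=0.5):
    """Impute each variable, then keep only those correlated with the target."""
    kept = {}
    for name, series in variables.items():
        filled = mean_impute(series)
        if relevance(filled, target) >= threshold:
            kept[name] = filled
    return kept

# Toy data: water level and two candidate predictors (rainfall has a gap).
level    = [10.0, 12.0, 11.0, 14.0, 13.0]
rainfall = [5.0, 7.0, None, 9.0, 8.0]       # strongly tied to the level
noise    = [3.0, 3.1, 2.9, 3.0, 3.05]       # carries no signal
kept = select_variables({"rainfall": rainfall, "noise": noise}, level)
```

The surviving variables would then feed the forecasting model (a Random Forest in the record above).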
Borghi, E.; Onis, M. de; Garza, C.; Broeck, J. van den; Frongillo, E.A.; Grummer-Strawn, L.; Buuren, S. van; Pan, H.; Molinari, L.; Martorell, R.; Onyango, A.W.; Martines, J.C.; Pinol, A.; Siyam, A.; Victoria, C.G.; Bhan, M.K.; Araújo, C.L.; Lartey, A.; Owusu, W.B.; Bhandari, N.; Norum, K.R.; Bjoerneboe, G.-E.Aa.; Mohamed, A.J.; Dewey, K.G.; Belbase, K.; Chumlea, C.; Cole, T.; Shrimpton, R.; Albernaz, E.; Tomasi, E.; Cássia Fossati da Silveira, R. de; Nader, G.; Sagoe-Moses, I.; Gomez, V.; Sagoe-Moses, C.; Taneja, S.; Rongsen, T.; Chetia, J.; Sharma, P.; Bahl, R.; Baerug, A.; Tufte, E.; Alasfoor, D.; Prakash, N.S.; Mabry, R.M.; Al Rajab, H.J.; Helmi, S.A.; Nommsen-Rivers, L.A.; Cohen, R.J.; Heinig, M.J.
The World Health Organization (WHO), in collaboration with a number of research institutions worldwide, is developing new child growth standards. As part of a broad consultative process for selecting the best statistical methods, WHO convened a group of statisticians and child growth experts to
Forsblom, Lara; Negrini, Lucio; Gurtner, Jean-Luc; Schumann, Stephan
In the Swiss vocational education system, which is often called a "Dual System", trainees enter into an apprenticeship contract with a training company. On average, 25% of those contracts are terminated prematurely (PCT). This article examines the relationship between training companies' selection methods and PCTs. The investigation is…
Kriens, J.; Strijbosch, L.W.G.; Vörös, J.
The set of efficient (Rho2)-combinations in the (Rho2)-plane of the Markowitz portfolio selection method consists of a series of strictly convex parabolas. At the transition points from one parabola to the next, the curve may be non-differentiable. The article gives necessary and sufficient
Hermann, Robert; Baumgartner, Rupert J; Vorbach, Stefan; Ragossnig, Arne; Pomberger, Roland
For the first time in Austria, fundamental technological and economic studies on recovering secondary raw materials from large landfills have been carried out, based on the 'LAMIS - Landfill Mining Austria' pilot project. A main focus of the research - and the subject of this article - was to develop an assessment or decision-making procedure that allows landfill owners to thoroughly examine the feasibility of a landfill mining project in advance. Currently there are no standard procedures that sufficiently cover all the multiple-criteria requirements. The basic structure of the multiple-attribute decision-making process was used to narrow down the selection, conceptual design and assessment of suitable procedures. Along with a breakdown into preliminary and main assessment, the entire foundation required was created, such as definitions of the requirements on an assessment method, selection and accurate description of the various assessment criteria, and classification of the target system for the present decision problem of 'landfill mining' versus 'retaining the landfill in after-care'. Based on these studies, cost-utility analysis and the analytic hierarchy process were selected from the range of multiple-attribute decision-making procedures and examined in detail. Overall, both methods have their pros and cons with regard to their use for assessing landfill mining projects. Merging these methods, or connecting them with single-criterion decision-making methods (like the net present value method), may turn out to be reasonable and constitute an appropriate assessment method. © The Author(s) 2015.
Background: Accurate diagnosis of cancer subtypes remains a challenging problem. Building classifiers based on gene expression data is a promising approach, yet the selection of non-redundant but relevant genes is difficult. The selected gene set should be small enough to allow diagnosis even in regular clinical laboratories and should ideally identify genes involved in cancer-specific regulatory pathways. Here an entropy-based method is proposed that selects genes related to the different cancer classes while at the same time reducing the redundancy among the genes. Results: The present study identifies a subset of features by maximizing the relevance and minimizing the redundancy of the selected genes. A merit called normalized mutual information is employed to measure the relevance and the redundancy of the genes. In order to find a more representative subset of features, an iterative procedure is adopted that incorporates an initial clustering followed by data partitioning and the application of the algorithm to each of the partitions. A leave-one-out approach then selects the most commonly selected genes across all the different runs, and the gene selection algorithm is applied again to pare down the list of selected genes until a minimal subset is obtained that gives a satisfactory accuracy of classification. The algorithm was applied to three different data sets and the results obtained were compared to work done by others using the same data sets. Conclusion: This study presents an entropy-based iterative algorithm for selecting genes from microarray data that are able to classify various cancer subtypes with high accuracy. In addition, the feature set obtained is very compact; that is, the redundancy between genes is reduced to a large extent. This implies that classifiers can be built with a smaller subset of genes.
Abate-Pella, Daniel; Freund, Dana M; Slovin, Janet P; Hegeman, Adrian D; Cohen, Jerry D
Carotenoids are a large class of compounds that are biosynthesized by condensation of isoprene units in plants, fungi, bacteria, and some animals. They are characteristically highly conjugated through double bonds, which leads to many isomers as well as susceptibility to oxidation and other chemical modifications. Carotenoids are important because of their potent antioxidant activity and are the pigments responsible for color in a wide variety of foods. Human consumption is correlated with many health benefits, including prevention of cancer, cardiovascular disease, and age-related disease. Extreme hydrophobicity, poor stability, and low concentration in biological samples make these compounds difficult to analyze and complicate the development of analytical methods aimed at their identification and quantification. Examples in the literature frequently report the use of exotic stationary phases, solvents, and additives, such as ethyl acetate, dichloromethane, and methyl tert-butyl ether, that are incompatible with liquid chromatography mass spectrometry (LC-MS). In order to address these issues, we implemented LC-MS-friendly conditions using a low-hydrophobicity cyano-propyl column (Agilent Zorbax SB-CN). We successfully differentiated between isomeric carotenoids by optimizing two gradient methods and using a mixture of 11 standards and LC-MS in positive ionization mode. Three complex biological samples from strawberry leaf, chicken feed supplement, and the photosynthetic bacterium Chloroflexus aurantiacus were analyzed, and several carotenoids were resolved in these diverse backgrounds. Our results show this methodology is a significant improvement over other alternatives for analyzing carotenoids because of its ease of use, rapid analysis time, high selectivity, and, most importantly, its compatibility with typical LC-MS conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhu, Qingjun; Song, Fengquan; Ren, Jie; Chen, Xueyong; Zhou, Bin
To further expand the application of artificial neural networks in the field of neutron spectrometry, criteria for choosing between an artificial neural network and the maximum entropy method for unfolding neutron spectra are presented. The counts of the Bonner spheres for IAEA neutron spectra were used as a database, and the artificial neural network and the maximum entropy method were used to unfold neutron spectra; the mean-square error of each spectrum was defined as the difference between the desired and unfolded spectra. After the information entropy of each spectrum was calculated using information entropy theory, the relationship between the mean-square errors of the spectra and the information entropy was acquired. Useful information from the information entropy guided the selection of unfolding methods. Because of the importance of the information entropy, a method for predicting the information entropy from the Bonner spheres' counts was established. The criteria, based on information entropy theory, can be used to choose between the artificial neural network and the maximum entropy unfolding methods, expanding the application of artificial neural networks to the unfolding of neutron spectra. - Highlights: • Two neutron spectrum unfolding methods, ANN and MEM, were compared. • The spectrum's entropy offers useful information for selecting unfolding methods. • For spectra with low entropy, the ANN was generally better than the MEM. • The spectrum's entropy was predicted based on the Bonner spheres' counts
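The information entropy that guides the choice of unfolding method above can be computed directly from a spectrum treated as a probability distribution. A small sketch follows; the example spectra are made up, not taken from the IAEA database.

```python
import math

def spectrum_entropy(spectrum):
    """Shannon information entropy of a spectrum, treated as a distribution."""
    total = sum(spectrum)
    probs = [x / total for x in spectrum if x > 0]
    return -sum(p * math.log(p) for p in probs)

flat   = [1.0, 1.0, 1.0, 1.0]        # maximally spread-out spectrum
peaked = [10.0, 0.1, 0.1, 0.1]       # most counts in one bin

# A flat n-bin spectrum attains the maximum entropy log(n);
# a peaked spectrum carries less entropy.
h_flat, h_peaked = spectrum_entropy(flat), spectrum_entropy(peaked)
```

Under the record's criterion, a low-entropy (peaked) spectrum would favour the ANN unfolding, a high-entropy one the MEM.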
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. In general, however, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be added to the calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and the existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, the NAS vectors, and their scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix with the spectra of the samples, and the scalar value of each NAS is obtained by norm computation. The distance between the candidate set and the selected set is then computed, and the samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. A validation test shows that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.
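Once every candidate's NAS norm has been computed, the sequential step described above reduces to a greedy max-min distance search. The sketch below starts from precomputed scalar norms (the projection-matrix step is assumed already done) and the example norms are hypothetical.

```python
def select_calibration_samples(nas_norms, n_select):
    """Greedy max-min selection on scalar NAS norms.

    Seeds with the largest norm, then repeatedly adds the candidate whose
    minimum distance to the already-selected set is largest.
    """
    selected = [max(range(len(nas_norms)), key=lambda i: nas_norms[i])]
    while len(selected) < n_select:
        candidates = [i for i in range(len(nas_norms)) if i not in selected]
        best = max(
            candidates,
            key=lambda i: min(abs(nas_norms[i] - nas_norms[j]) for j in selected),
        )
        selected.append(best)
    return selected

# Hypothetical NAS norms for five candidate samples.
norms = [0.10, 0.90, 0.50, 0.48, 0.12]
picked = select_calibration_samples(norms, 3)   # spreads across the range
```

Only the samples in `picked` would then have their analyte concentrations measured by the reference method.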
Patterson, Fiona; Lopes, Safiatu; Harding, Stephen; Vaux, Emma; Berkin, Liz; Black, David
The aim of this study was to follow up a sample of physicians who began core medical training (CMT) in 2009. This paper examines the long-term validity of CMT and GP selection methods in predicting performance in the Membership of Royal College of Physicians (MRCP(UK)) examinations. We performed a longitudinal study, examining the extent to which the GP and CMT selection methods (T1) predict performance in the MRCP(UK) examinations (T2). A total of 2,569 applicants from 2008-09 who completed CMT and GP selection methods were included in the study. Looking at MRCP(UK) part 1, part 2 written and PACES scores, both CMT and GP selection methods show evidence of predictive validity for the outcome variables, and hierarchical regressions show the GP methods add significant value to the CMT selection process. CMT selection methods predict performance in important outcomes and have good evidence of validity; the GP methods may have an additional role alongside the CMT selection methods. © Royal College of Physicians 2017. All rights reserved.
Zhang, Xiaohan; Yang, Guang; Yang, Yongbo; Huang, Junhua
While hyperspectral data carry rich spectral information, they comprise many bands with high correlation coefficients, causing great data redundancy. A reasonable band selection is important for subsequent processing: bands with a large amount of information and low correlation should be selected. On this basis, and according to the needs of target detection applications, the spectral characteristics of the objects of interest are taken into consideration in this paper, and a new method based on spectral differences is proposed. First, according to the spectral differences of the targets of interest, a difference matrix is constructed that represents the different spectral reflectances of the targets in different bands. By setting a threshold, the bands satisfying the condition are retained, constituting a subset of bands. Then, the correlation coefficients between bands are calculated and the correlation matrix is built; according to the size of the correlation coefficients, the bands are divided into several groups. Finally, the normalized variance is used as a proxy for the information content of each band, and the bands are sorted by the value of their normalized variance. Given the required number of bands, the optimum band combination can be obtained by these three steps. This method retains the greatest degree of difference between the targets of interest and is easy for a computer to carry out automatically. In addition, a false-color image synthesis experiment was carried out using the bands selected by this method as well as by three other methods to show the performance of the method presented in this paper.
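The three steps can be sketched as follows. This is a simplified illustration: the correlation-grouping step is reduced to greedy pruning of highly correlated bands, and the target spectra and pixel values are invented.

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def select_bands(target_a, target_b, image_bands, diff_thresh, corr_thresh, k):
    """Pick k bands: keep bands separating the two targets, prune bands
    correlated with an already-kept band, rank survivors by normalized variance."""
    n_bands = len(target_a)
    # Step 1: keep bands where the targets' reflectances differ enough.
    candidates = [j for j in range(n_bands)
                  if abs(target_a[j] - target_b[j]) >= diff_thresh]
    # Step 2: greedily drop bands highly correlated with a kept band.
    pruned = []
    for j in candidates:
        if all(abs(pearson(image_bands[j], image_bands[i])) < corr_thresh
               for i in pruned):
            pruned.append(j)
    # Step 3: rank survivors by normalized variance (variance / mean^2).
    def norm_var(j):
        col = image_bands[j]
        m = sum(col) / len(col)
        return sum((x - m) ** 2 for x in col) / (len(col) * m * m)
    return sorted(pruned, key=norm_var, reverse=True)[:k]

# Hypothetical reflectance spectra of two targets over four bands,
# plus the per-band pixel values of a tiny four-pixel image.
target_a = [0.9, 0.5, 0.2, 0.8]
target_b = [0.1, 0.45, 0.8, 0.75]
bands = [[1.0, 2.0, 3.0, 4.0], [5.0, 5.1, 5.0, 4.9],
         [2.0, 1.0, 8.0, 3.0], [0.5, 0.5, 0.6, 0.5]]
picked = select_bands(target_a, target_b, bands, 0.3, 0.95, 2)
```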
Nowadays more and more cloud infrastructure service providers are offering large numbers of service instances that combine diversified resources, such as computing, storage, and network. For cloud infrastructure services, however, the lack of a description standard and the inadequate research on systematic discovery and selection methods have made it difficult for users to discover and choose services. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on this description, a systematic discovery and selection method for cloud infrastructure services is proposed. The automatic analysis techniques of the feature model are introduced to verify the model's validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where the subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.
Khairan D. Rajab
Phishing is one of the serious web threats: it involves mimicking authenticated websites to deceive users in order to obtain their financial information. Phishing has caused financial damage to different online stakeholders on the order of hundreds of millions, so it is essential to minimize this risk. Classifying websites into phishing and legitimate types is a primary data-mining task that security experts and decision makers hope to improve, particularly with respect to the detection rate and the reliability of the results. One way to ensure reliable results and to enhance performance is to identify a set of related features early on, so that the data dimensionality is reduced and irrelevant features are discarded. To increase the reliability of preprocessing, this article proposes a new feature selection method that combines the scores of multiple known methods to minimize discrepancies in feature selection results. The proposed method has been applied to the problem of website phishing classification to show its pros and cons in identifying relevant features. Results on a security dataset reveal that the proposed preprocessing method was able to derive new feature datasets which, when mined, generate highly competitive classifiers with respect to detection rate when compared to results obtained with other feature selection methods.
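The core idea of combining the scores of several feature-selection methods can be sketched with a simple rank fusion. This is an assumption-laden illustration: the two scoring methods, the feature names, and the score values below are hypothetical, not taken from the article.

```python
def fuse_rankings(score_lists, k):
    """Average each feature's rank across several scoring methods and
    return the k features with the best (lowest) mean rank."""
    features = list(score_lists[0].keys())
    mean_rank = {}
    for f in features:
        ranks = []
        for scores in score_lists:
            # rank 0 = best (highest score) within this method
            ordering = sorted(scores, key=scores.get, reverse=True)
            ranks.append(ordering.index(f))
        mean_rank[f] = sum(ranks) / len(ranks)
    return sorted(features, key=mean_rank.get)[:k]

# Hypothetical scores from two methods (e.g. information gain, chi-square).
info_gain = {"has_ip_url": 0.9, "url_length": 0.4, "has_at_sign": 0.7, "age": 0.1}
chi2      = {"has_ip_url": 30.0, "url_length": 12.0, "has_at_sign": 25.0, "age": 2.0}
top = fuse_rankings([info_gain, chi2], k=2)
```

Averaging ranks rather than raw scores avoids mixing incomparable scales, which is one way the discrepancies between methods can be reduced.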
Cadwallader, L.C.; Djerassi, H.; Lampin, I.
This paper presents a comparison of the varying methods used to identify and select accident-initiating events for safety analysis and probabilistic risk assessment (PRA). Initiating events are important in that they define the extent of a given safety analysis or PRA. Comprehensiveness in the identification and selection of initiating events is necessary to ensure that a thorough analysis is being performed. While total completeness can never be realized, inclusion of all safety-significant events can be attained. The European approach to initiating event identification and selection arises from within a newly developed safety analysis methodology framework. This is a functional approach, with accident initiators based on events that will cause a system or facility loss of function. The US method divides accident initiators into two groups: internal and external events. Since traditional US PRA techniques are applied to fusion facilities, the recommended PRA-based approach is a review of historical safety documents coupled with a facility-level Master Logic Diagram. The US and European methods are described, and both are applied to a proposed International Thermonuclear Experimental Reactor (ITER) magnet system in a sample problem. Contrasts between the US and European methods are discussed. Within their respective frameworks, each method can provide the comprehensiveness of safety-significant events needed for a thorough analysis. 4 refs., 8 figs., 11 tabs
Tahayori, B; Khaneja, N; Johnston, L A; Farrell, P M; Mareels, I M Y
The design of slice selective pulses for magnetic resonance imaging can be cast as an optimal control problem. The Fourier synthesis method is an existing approach to solve these optimal control problems. In this method the gradient field as well as the excitation field are switched rapidly and their amplitudes are calculated based on a Fourier series expansion. Here, we provide a novel insight into the Fourier synthesis method via representing the Bloch equation in spherical coordinates. Based on the spherical Bloch equation, we propose an alternative sequence of pulses that can be used for slice selection which is more time efficient compared to the original method. Simulation results demonstrate that while the performance of both methods is approximately the same, the required time for the proposed sequence of pulses is half of the original sequence of pulses. Furthermore, the slice selectivity of both sequences of pulses changes with radio frequency field inhomogeneities in a similar way. We also introduce a measure, referred to as gradient complexity, to compare the performance of both sequences of pulses. This measure indicates that for a desired level of uniformity in the excited slice, the gradient complexity for the proposed sequence of pulses is less than the original sequence. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
The process of developing a supply chain is long and complex, with many restrictions and obstacles that accompany it. In this paper the authors focus on the first stage in developing the supply chain: the process of selecting partners. This phase of the development significantly affects the competitive position of the supply chain and the creation of value for the consumer. The selected partners, or 'links' of the supply chain, influence the future performance of the chain, which points to the necessity of full commitment to this process. The process of selecting a partner is conditioned by the key criteria used on that occasion. The use of inadequate criteria may endanger the whole process of building a supply chain by selecting partners ill-suited to future supply chain needs. This paper analyses partner selection based on the key criteria used by managers in Serbia. For this purpose the AHP method was used; the results show which criteria are top-ranked from the managers' point of view.
The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.
A novel method of fault line selection based on the IOS is presented. First, the IOS is established from a mathematical model that adopts the TZSC signal to replace the built-in signal of the Duffing chaotic oscillator, with appropriate parameters selected. Then, each line's TZSC is decomposed by a db10 wavelet packet to obtain the CFB under the maximum-energy principle, and the CFB is solved by the IOS. Finally, the maximum chaotic distance and the average chaotic distance on the phase trajectory are used to identify the faulty line. Simulation results show that the proposed method can accurately distinguish faulty lines from healthy lines against a strongly noisy background. The non-detection zones of the proposed method are also elaborated.
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Fraley, D.W.; Burnham, J.B.
This report describes the methods used to probabilistically analyze data related to the uranium supply, the FBR's competitive dates, the time and costs of development strategies, and the economic benefits. It also describes the econometric methods used to calculate the economic risks of mistiming the development. Seven strategies for developing the FBR are analyzed. The various measures of a strategy's performance - timing, costs, benefits, and risks - are combined into several criteria which are used to evaluate the seven strategies. Methods are described for selecting a strategy based on a number of alternative criteria
This document describes the process that the US Department of Energy (DOE), Richland Operations Office (RL) and contractor laboratories use to select appropriate or develop new or modified analytical methods. These methods are needed to provide reliable mixed waste characterization data that meet project-specific quality assurance (QA) requirements while also meeting health and safety standards for handling radioactive materials. This process will provide the technical basis for DOE's analysis of mixed waste and support requests for regulatory approval of these new methods when they are used to satisfy the regulatory requirements of the Hanford Federal Facility Agreement and Consent Order (Tri-party Agreement) (Ecology et al. 1992)
Kumasaki, H.; Shinozaki, Y.
The X-ray fluorescence method using a small 55Fe source was examined and found to be applicable for separating polychlorinated plastics from plastic waste in model areas of Tokyo designated for investigating their content in the waste. The weight ratios of soft and hard polychlorinated plastics to the total plastic waste estimated by this method were 15.6% and 0.29%, respectively. These values agree well with the results obtained with the Beilstein method. (author)
Information system (IS) project selection is of critical importance to every organization in a dynamic competitive environment. The aim of this paper is to develop a hybrid multicriteria group decision making approach based on intuitionistic fuzzy theory for IS project selection. The decision makers' assessment information can be expressed in the form of real numbers, interval-valued numbers, linguistic variables, and intuitionistic fuzzy numbers (IFNs). All these pieces of evaluation information can be transformed into IFNs. The intuitionistic fuzzy weighted averaging (IFWA) operator is utilized to aggregate individual opinions of decision makers into a group opinion. Intuitionistic fuzzy entropy is used to obtain the entropy weights of the criteria. The TOPSIS method combined with intuitionistic fuzzy sets is proposed to select the appropriate IS project in a group decision making environment. Finally, a numerical example of information system project selection is given to illustrate application of the hybrid multi-criteria group decision making (MCGDM) method based on intuitionistic fuzzy theory and the TOPSIS method.
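The crisp core of the entropy-weight-plus-TOPSIS step can be sketched as follows. This is an illustrative simplification: it replaces the paper's intuitionistic fuzzy numbers with plain real scores, and the decision matrix, criteria and alternatives are all hypothetical.

```python
import numpy as np

# Hypothetical decision matrix: 4 IS projects (rows) scored on 3 benefit
# criteria (columns) after aggregating the group opinion.
X = np.array([[7., 9., 6.],
              [8., 7., 8.],
              [9., 6., 7.],
              [6., 8., 9.]])

# Entropy weights: criteria on which alternatives differ more get more weight.
P = X / X.sum(axis=0)                               # column proportions
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # normalised entropy
w = (1 - E) / (1 - E).sum()                         # divergence -> weights

# TOPSIS: rank by relative closeness to the ideal solution.
R = X / np.sqrt((X ** 2).sum(axis=0))               # vector normalisation
V = R * w                                           # weighted matrix
ideal, anti = V.max(axis=0), V.min(axis=0)          # all criteria are benefits
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)                 # higher = better project
ranking = np.argsort(-closeness)                    # best alternative first
```

In the full method the same two stages run on IFNs, with intuitionistic fuzzy entropy and IFN distance measures in place of the Shannon entropy and Euclidean distances used here.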
The purpose of stock portfolio selection is to allocate capital across a large number of stocks in order to bring the most profitable return for investors. Most past literature has considered the portfolio selection problem only on the basis of past crisp or quantitative data. However, many qualitative as well as quantitative factors influence stock portfolio selection in real investment situations. It is very important for experts or decision-makers to use their experience or knowledge to predict the performance of each stock and build a stock portfolio. Because the knowledge, experience, and background of each expert are different and vague, different types of 2-tuple linguistic variables are suitable for expressing experts' opinions on the performance of each stock with respect to the criteria. Based on the linguistic evaluations of experts, the linguistic TOPSIS and linguistic ELECTRE methods are combined to present a new decision-making method for dealing with stock selection problems in this paper. Once the investment set has been determined, the risk preferences of the investor are considered to calculate the investment ratio of each stock in the investment set. Finally, an example is implemented to demonstrate the practicability of the proposed method.
Song, Yun S; Steinrücken, Matthias
The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.
Background: Several cis-acting regulatory sequences functioning at the level of mRNA or nascent peptide, and specifically influencing transcription or translation, have been described. These regulatory elements often respond to specific chemicals. Results: We have developed a method that allows us to select cis-acting regulatory sequences that respond to diverse chemicals. The method is based on the β-lactamase gene containing a random sequence inserted into the beginning of the ORF. Several rounds of selection are used to isolate sequences that suppress β-lactamase expression in response to the compound under study. We have isolated sequences that respond to erythromycin, troleandomycin, chloramphenicol, meta-toluate and homoserine lactone. By introducing synonymous and non-synonymous mutations we have shown that, at least in the case of erythromycin, the sequences act at the peptide level. We have also tested the cross-activities of the constructs and found that in most cases the sequences respond most strongly to the compound on which they were isolated. Conclusions: Several selected peptides showed ligand-specific changes in amino acid frequencies, but no consensus motif could be identified. This is consistent with previous observations on natural cis-acting peptides, showing that it is often impossible to demonstrate a consensus. Applying the currently developed method on a larger scale, by selecting and comparing an extended set of sequences, might allow the sequence rules underlying the activity of cis-acting regulatory peptides to be identified.
A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method is proposed for these effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross-validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for regulatory acceptability of QSARs were fully considered: using an unambiguous multiple linear regression (MLR) algorithm to build the model, using several validation methods to assess its performance, defining the applicability domain, and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from large sets of molecular descriptors.
Muddukrishna, B S; Pai, Vasudev; Lobo, Richard; Pai, Aravinda
In the present study, five important binary fingerprinting techniques were used to model novel flavones for the selective inhibition of Tankyrase I. Of the fingerprints used, the atom-pair fingerprint resulted in a statistically significant 2D QSAR model using a kernel-based partial least squares regression method. This model indicates that the presence of electron-donating groups contributes positively to activity, whereas the presence of electron-withdrawing groups contributes negatively. The model could be used to develop more potent as well as selective analogues for the inhibition of Tankyrase I. Schematic representation of the 2D QSAR workflow.
Neves Haroldo HR
Background: The availability of high-density panels of SNP markers has opened new perspectives for marker-assisted selection strategies, such that genotypes for these markers are used to predict the genetic merit of selection candidates. Because the number of markers is often much larger than the number of phenotypes, marker effect estimation is not a trivial task. The objective of this research was to compare the predictive performance of ten different statistical methods employed in genomic selection, by analyzing data from a heterogeneous stock mice population. Results: For the five traits analyzed (W6W: weight at six weeks; WGS: growth slope; BL: body length; %CD8+: percentage of CD8+ cells; CD4+/CD8+: ratio between CD4+ and CD8+ cells), within-family predictions were more accurate than across-family predictions, although this superiority in accuracy varied markedly across traits. For within-family prediction, two kernel methods, Reproducing Kernel Hilbert Spaces Regression (RKHS) and Support Vector Regression (SVR), were the most accurate for W6W, while a polygenic model had comparable performance. A form of ridge regression assuming that all markers contribute to the additive variance (RR_GBLUP) figured among the most accurate for WGS and BL, while two variable selection methods (LASSO and Random Forest, RF) had the greatest predictive abilities for %CD8+ and CD4+/CD8+. RF, RKHS, SVR and RR_GBLUP outperformed the remaining methods in terms of bias and inflation of predictions. Conclusions: Methods with large conceptual differences reached very similar predictive abilities, and a clear re-ranking of methods was observed as a function of the trait analyzed. Variable selection methods were more accurate than the remainder in the case of %CD8+ and CD4+/CD8+, and these traits are likely to be influenced by a smaller number of QTL than the remainder. Judged by their overall performance across traits and computational requirements, RR
American Society for Testing and Materials. Philadelphia
1.1 This guide describes different mathematical methods that may be used to calculate absorbed dose and criteria for their selection. Absorbed-dose calculations can determine the effectiveness of the radiation process, estimate the absorbed-dose distribution in product, or supplement or complement, or both, the measurement of absorbed dose. 1.2 Radiation processing is an evolving field and annotated examples are provided in Annex A6 to illustrate the applications where mathematical methods have been successfully applied. While not limited by the applications cited in these examples, applications specific to neutron transport, radiation therapy and shielding design are not addressed in this document. 1.3 This guide covers the calculation of radiation transport of electrons and photons with energies up to 25 MeV. 1.4 The mathematical methods described include Monte Carlo, point kernel, discrete ordinate, semi-empirical and empirical methods. 1.5 General purpose software packages are available for the calcul...
Aoki, Takayuki; Kobayashi, Hiroyuki; Higuchi, Shinichi; Shimizu, Sadato
A Ni-base alloy weld containing cracks due to stress corrosion cracking, found in the reactor internals of the oldest BWR in Japan, Tsuruga unit 1, in 1999, was examined by three types of UT method. After this examination, the depth of each crack was confirmed by alternately carrying out slight excavation with a grinder and PT examination until the crack disappeared. The depth measured by the former method was then compared with that measured by the latter. In this fashion, the performance of the UT methods was verified. As a result, a combination of the three types of UT method was found to meet the acceptance criteria given by ASME Sec. XI Appendix VIII, Performance Demonstration for Ultrasonic Examination Systems, Supplement 6. In this paper, the results of the UT examination described above and their evaluation are discussed. (author)
Quilty, J.; Adamowski, J. F.
Urban water supply systems are often stressed during seasonal outdoor water use, as climate-related water demands are variable in nature, making it difficult to optimize the operation of the water supply system. Urban water demand (UWD) forecasts that fail to include meteorological conditions as inputs to the forecast model may produce poor forecasts, as they cannot account for the increase or decrease in demand related to those conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods, by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevancy, conditional relevancy, and redundancy within a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
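A minimal sketch of the first stage of such input variable selection, ranking candidates by (unconditional) mutual information with the target, could look like this. The histogram MI estimator, candidate inputs and toy demand model are illustrative assumptions; a full implementation would add a conditional-MI stage to discount redundant candidates.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(1)
n = 2000
temp = rng.normal(20, 5, n)          # hypothetical daily temperature input
rain = rng.exponential(2.0, n)       # hypothetical daily rainfall input
junk = rng.normal(0, 1, n)           # irrelevant candidate input
demand = 100 + 3 * temp - 2 * rain + rng.normal(0, 3, n)  # toy demand series

# Rank candidates by relevance to the target; a conditional-MI stage would
# then re-score the remaining candidates given those already selected.
scores = {name: mutual_info(v, demand)
          for name, v in [("temp", temp), ("rain", rain), ("junk", junk)]}
best = max(scores, key=scores.get)
```

Density-based estimators (e.g. kernel or k-nearest-neighbour) are usually preferred over histograms for the multivariate conditional terms, but the ranking logic is the same.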
Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.
We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser-induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of the Bayesian information criterion (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼3 wt.%. The statistical significance of these improvements was ∼85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
Selected Theoretical Studies Group contributions to the 14th International Cosmic Ray conference. [including studies on galactic molecular hydrogen, interstellar reddening, and on the origin of cosmic rays
The galactic distribution of H2 was studied through gamma radiation and through X-ray, optical, and infrared absorption measurements from SAS-2 and other sources. A comparison of the latitude distribution of gamma-ray intensity with reddening data shows reddening data to give the best estimate of interstellar gas in the solar vicinity. The distribution of galactic cosmic ray nucleons was determined and appears to be identical to the supernova remnant distribution. Interactions between ultrahigh energy cosmic-ray nuclei and intergalactic photon radiation fields were calculated, using the Monte Carlo method.
Nickols-Richardson, Sharon M.; Piehowski, Kathryn E.; Metzgar, Catherine J.; Miller, Debra L.; Preston, Amy G.
BACKGROUND/OBJECTIVES: The type of sweet snack incorporated into an energy-restricted diet (ERD) may produce differential effects on the metabolic improvements associated with body weight (BW) loss. This study compared the effects of incorporating either twice-daily energy-controlled dark chocolate snacks plus a once-daily sugar-free cocoa beverage (DC) or non-chocolate snacks plus a sugar-free non-cocoa beverage (NC) into an ERD on BW loss and metabolic outcomes. MATERIALS/METHODS: In an 18-week randomize...
Yazid, M.; Aris Bastianudin; Widdi Usada
Selection of chromium-reducing bacteria in the waste of tanning leather industries by the ozonization method has been carried out. The objective of this research was to obtain bacterial isolates from the chromium-containing waste, which could then be used as chromium bioremediation agents to improve waste treatment in the tanning leather industry. Selection of bacteria in the waste was carried out by the ozonization method with exposure times varied from 0 to 210 minutes at 15-minute intervals. Isolated bacteria were grown on BHI media for 24 hours at 37°C and then inoculated by the streak plate method on TBX, MC, EA, CTM and BP media. Characterization of the bacteria was done by examining colony morphology, cell morphology and biochemical characteristics, and the isolates were identified by the matching profile method. Five bacterial isolates (BCR1, BCR2, BCR3, BCR4 and BCR5) with different phenotypic characters were obtained. Of the five isolates, BCR2 remained resistant to ozone for up to 180 minutes of ozonization and was identified as belonging to the genus Bacillus. The examination results showed that this isolate was able to reduce the chromium concentration in the waste of tanning leather industries with an efficiency of 71.03%. (author)
Tian, Han; Li, Ming; Wang, Yue; Sheng, Dinggao; Liu, Jun; Zhang, Linna
Hemoglobin concentration is commonly used in clinical medicine to diagnose anemia, identify bleeding, and manage red blood cell transfusions. The gold-standard method for determining hemoglobin concentration in blood requires a reagent. Spectral methods have the advantage of fast, reagent-free measurement. However, model calibration with the full spectrum is time-consuming. Moreover, it is desirable to use only a few variables considering the size and cost of instrumentation, especially for a portable biomedical instrument. This study presents different wavelength selection methods for choosing optical wavelengths for total hemoglobin concentration determination in whole blood. The results showed that a model using only a two-wavelength combination (1143 nm, 1298 nm) can retain the predictive ability of the full spectrum. It appears that proper selection of optical wavelengths can be more effective than using the whole spectrum for determining hemoglobin in whole blood. We also discuss the influence of water absorptivity on the wavelength selection. This research provides valuable references for designing portable NIR instruments for determining hemoglobin concentration, and may provide some experience for noninvasive hemoglobin measurement by NIR methods.
By simplifying the tolerance problem and treating faulty voltages at different test points as independent variables, the integer-coded table technique has been proposed to simplify the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption results in overly conservative solutions. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated from the ambiguity sets and the faulty voltage distribution, determined by component tolerances. Second, the selected optimal test point is used to expand the current graph node using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; therefore, it is a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
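The entropy-based evaluation of a test point can be sketched roughly as follows; the fault dictionary and integer voltage codes here are hypothetical, and the real method derives the ambiguity sets from component tolerances and voltage distributions rather than from fixed codes.

```python
import math

# Hypothetical fault dictionary: an integer code for the voltage interval
# observed at each candidate test point under each fault state F1..F6.
fault_codes = {
    "TP1": [0, 0, 1, 1, 2, 2],
    "TP2": [0, 1, 0, 1, 0, 1],
    "TP3": [0, 0, 0, 0, 1, 2],
}

def partition_entropy(codes):
    """Entropy (bits) of the ambiguity-set partition a test point induces:
    faults sharing a voltage code fall into the same ambiguity set."""
    n = len(codes)
    counts = {}
    for c in codes:
        counts[c] = counts.get(c, 0) + 1
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

# Greedy heuristic: expand the search with the test point whose partition
# has maximal entropy, i.e. the most even split of the candidate fault set.
best_tp = max(fault_codes, key=lambda tp: partition_entropy(fault_codes[tp]))
```

Here TP1 splits the six faults into three ambiguity sets of two, which carries more information than TP2's two-way or TP3's lopsided split, so the greedy step picks it first.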
Dobos, P.; Tamás, P.; Illés, B.
Adequate establishment and operation of warehouse logistics significantly determines a company's competitiveness, because it greatly affects the quality and the selling price of the goods that production companies produce. In order to implement and manage an adequate warehouse system, an adequate warehouse position, stock management model, warehouse technology, a motivated workforce committed to process improvement, and a material handling strategy are necessary. In practice, companies have paid little attention to selecting the warehouse strategy properly, although it has a major influence on production in the case of a materials warehouse, and on smooth customer service in the case of a finished goods warehouse, because a poor choice can incur large material handling losses. Due to dynamically changing production structures, frequent reorganization of warehouse activities is needed, to which the majority of companies essentially fail to react. This work presents a simulation test system framework for selecting an eligible warehouse material handling strategy, and also the decision method for the selection.
The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration ranges frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed-rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models, because only slight differences between the errors (presented through the relative residuals) were obtained. Estimation of the significance of the differences in the relative residuals was achieved using the Wilcoxon signed-rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple nonparametric statistical test provides statistical confirmation of the choice of an adequate regression model.
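A rough sketch of the weighted-regression side of such a comparison, under assumed calibration data with proportional (heteroscedastic) error: the concentrations, noise level and the 1/x² weighting scheme are illustrative choices, not the validated method's settings, and the relative residuals computed here are the quantity the nonparametric tests would then operate on.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration data: wide concentration range, triplicate runs,
# with error proportional to concentration, as is common in bioanalysis.
x = np.array([1, 2, 5, 10, 50, 100, 500, 1000], dtype=float)
xr = np.repeat(x, 3)
y = (0.05 * xr + 0.1) * (1 + rng.normal(0, 0.05, xr.size))

def weighted_fit(x, y, w):
    """Weighted least squares for y = a*x + b; returns (a, b)."""
    A = np.column_stack([x, np.ones_like(x)])
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

def relative_residuals(x, y, a, b):
    """Residuals as a percentage of the measured response."""
    return 100 * (y - (a * x + b)) / y

# Fit an unweighted and a 1/x^2-weighted linear model and collect their
# relative residuals for later nonparametric comparison.
fits, rr = {}, {}
for name, w in [("unweighted", np.ones_like(xr)), ("1/x^2", 1 / xr ** 2)]:
    a, b = weighted_fit(xr, y, w)
    fits[name] = (float(a), float(b))
    rr[name] = relative_residuals(xr, y, a, b)
```

Candidate quadratic fits drop in the same way (add an x² column), and the two models' relative-residual vectors can then be compared with a rank-based test.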
Background: We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationship among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results: In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions: We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.
Stults, Ian C.
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model
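Dempster's rule of combination, the core operation of the evidence theory (Dempster-Shafer) formulation mentioned above, can be sketched as follows; the frame of discernment and the two mass functions are hypothetical examples, not the models from this work.

```python
# Toy frame of discernment for a fidelity-selection question: is a
# low-fidelity tool's prediction "ok" or "off" for a given concept?
FRAME = frozenset({"ok", "off"})

def combine(m1, m2):
    """Dempster's rule of combination for two basic belief assignments,
    given as dicts mapping frozenset focal elements to masses."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                 # mass lost to conflict
    k = 1.0 - conflict                              # normalisation constant
    return {s: mass / k for s, mass in combined.items()}

# Two hypothetical evidence sources, e.g. a sparse validation study and an
# expert judgement; mass on FRAME itself represents ignorance, which is how
# evidence theory copes with the data sparsity discussed above.
m_study = {frozenset({"ok"}): 0.6, FRAME: 0.4}
m_expert = {frozenset({"ok"}): 0.5, frozenset({"off"}): 0.2, FRAME: 0.3}

m = combine(m_study, m_expert)
belief_ok = m.get(frozenset({"ok"}), 0.0)   # belief committed exactly to "ok"
```

The ability to leave mass on the whole frame, rather than forcing a precise probability, is what makes this representation attractive when only limited model-uncertainty data exist.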
Mohammed, Habiba Ibrahim; Majid, Zulkepli; Yusof, Norhakim Bin; Bello Yamusa, Yamusa
Landfilling remains the most common systematic technique of solid waste disposal in most developed and developing countries. Finding a suitable site for a landfill is a very challenging task. The landfill site selection process aims to provide suitable areas that will protect the environment and public health from pollution and hazards. Therefore, various factors such as environmental, physical, socio-economic, and geological criteria must be considered before siting any landfill. This makes the site selection process rigorous and tedious, because it involves the processing of a large amount of spatial data, rules and regulations from different agencies, and also policy from decision makers. Multi-criteria decision analysis allows conflicting objectives and decision-maker preferences to be incorporated into spatial decision models. This paper particularly analyzes the multi-criteria evaluation (MCE) method of landfill site selection for solid waste management by means of literature reviews and surveys. The study will help the decision makers and waste management authorities to choose the most effective method when considering landfill site selection.
Chee Keong Kwoh
Feature selection is an important step in building accurate classifiers and provides better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, exhaustive searches of all combinations of features are a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
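The central claim, that I(X;Y) = H(Y) certifies X as a Markov blanket of Y while pairwise mutual information can fail to see it, can be illustrated with an XOR toy data set (a standard construction, used here purely for illustration):

```python
import math
from collections import Counter
from itertools import product

def entropy(samples):
    """Shannon entropy (bits) of an empirical discrete distribution."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for paired discrete samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# XOR data set: the class y is a deterministic function of the pair (x1, x2).
data = [(x1, x2, x1 ^ x2)
        for x1, x2 in product([0, 1], repeat=2) for _ in range(25)]
x1s, x2s, ys = zip(*data)

mi_single = mutual_info(x1s, ys)                  # each feature alone: 0 bits
mi_joint = mutual_info(list(zip(x1s, x2s)), ys)   # the pair jointly: H(Y)
h_y = entropy(ys)
# mi_joint equals h_y, so {x1, x2} is a Markov blanket of y, yet no algebraic
# combination of the (zero) pairwise MIs could have revealed this.
```

This is exactly the situation where filter methods built on pairwise scores break down and an exhaustive search over feature combinations becomes necessary.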
With the ever-increasing demand for manufactured products of hard alloys and metals with high surface finish and complex shape geometry, more interest is now being paid to non-traditional machining (NTM) processes, in which energy in its direct form is used to remove material from the workpiece surface. Compared to conventional machining processes, NTM processes possess almost unlimited capabilities, and there is a strong belief that the use of NTM processes will go on increasing in a diverse range of applications. The presence of a large number of NTM processes with complex characteristics and capabilities, and the lack of experts in the NTM process selection domain, call for the development of a structured approach to NTM process selection for a given machining application. Past researchers have attempted to solve NTM process selection problems using various complex mathematical approaches which often require profound knowledge of mathematics or artificial intelligence on the part of process engineers. In this paper, four NTM process selection problems are solved using an integrated PROMETHEE (preference ranking organization method for enrichment evaluation) and GAIA (geometrical analysis for interactive aid) method which acts as a visual decision aid for process engineers. The observed results are quite satisfactory and exactly match the expected solutions.
Wang, Qiang; Zhang, Yongshun; Guo, Yiduo
The ground target detection performance of space-time adaptive processing (STAP) degrades when non-homogeneity of clutter power is caused by training samples contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. First, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Second, the similarities of different training samples are obtained by calculating the mean Hausdorff distance, so as to reject contaminated training samples. Third, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and the mean Hausdorff distances between the projected CUT and the training samples are calculated. Fourth, the distances are sorted by value, and the training samples with the larger values are preferentially selected, realizing the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
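A sketch of the mean Hausdorff distance similarity test used in the rejection stage might look like this; the snapshot dimensions, contamination model and decision rule are illustrative assumptions (the toy simply flags the sample least similar to the cell under test as contaminated).

```python
import numpy as np

def mean_hausdorff(a, b):
    """Mean Hausdorff distance between two sample vectors, treating each
    vector's entries as a point set on the real line."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.abs(a[:, None] - b[None, :])      # pairwise |a_i - b_j|
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(3)

# Hypothetical range-cell snapshots: ten clean clutter samples and one
# sample contaminated by a strong target-like component.
clean = [rng.normal(0, 1, 16) for _ in range(10)]
contaminated = rng.normal(0, 1, 16) + 5.0
cut = rng.normal(0, 1, 16)                   # cell under test (clutter only)

dists = [mean_hausdorff(cut, s) for s in clean + [contaminated]]
flagged = int(np.argmax(dists))              # least similar sample: rejected
```

In the actual method the snapshots are complex space-time vectors and the comparison is repeated after projecting out the target component in the CUT, but the distance computation itself has this shape.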
Berthold-Pluta, Anna; Garbowska, Monika; Stefańska, Ilona; Pluta, Antoni
Bacteria of the genus Cronobacter are emerging food-borne pathogens. Foods contaminated with Cronobacter spp. may pose a risk to infants or adults with suppressed immunity. This study was aimed at determining the microbiological quality of ready-to-eat (RTE) plant-origin food products available on the Polish market, with special emphasis on the prevalence of bacteria of the genus Cronobacter. Analyses were carried out on 60 samples of commercial RTE plant-origin food products, including leaf vegetables (20 samples), sprouts (20 samples) and non-pasteurized vegetable, fruit and fruit-vegetable juices (20 samples). All samples were examined for the total count of aerobic mesophilic bacteria (TAMB) and for the presence of Cronobacter spp. The isolates of Cronobacter spp. were subjected to genetic identification and differentiation by 16S rDNA sequencing, PCR-RFLP analysis and RAPD-PCR, and to evaluation of antibiotic susceptibility by the disk diffusion assay. The TAMB counts in samples of lettuces, sprouts and non-pasteurized fruit, vegetable and fruit-vegetable juices were in the ranges of 5.6-7.6, 6.7-8.4 and 2.9-7.7 log CFU g⁻¹, respectively. The presence of Cronobacter spp. was detected in 21 (35%) samples of the products, including 6 (30%) samples of leaf vegetables (rucola, lamb's lettuce, endive escarole and leaf vegetable mix) and 15 (75%) samples of sprouts (alfalfa, broccoli, small radish, lentil, sunflower, leek and sprout mix). No Cronobacter spp. were detected in the analyzed samples of non-pasteurized fruit, vegetable and fruit-vegetable juices. The 21 strains of Cronobacter spp. isolated from leaf vegetables and sprouts included: 13 strains of C. sakazakii, 4 strains of C. muytjensii, 2 strains of C. turicensis, one strain of C. malonaticus and one strain of C. condimenti. All isolated C. sakazakii, C. muytjensii, C. turicensis and C. malonaticus strains were sensitive to ampicillin, cefepime, chloramphenicol, gentamycin
Bhering, L L; Junqueira, V S; Peixoto, L A; Cruz, C D; Laviola, B G
The aim of this study was to evaluate different methods used in genomic selection, and to verify which select a higher proportion of individuals with superior genotypes. Thus, F2 populations of different sizes were simulated (100, 200, 500, and 1000 individuals) with 10 replications each. These consisted of 10 linkage groups (LG) of 100 cM each, containing 100 equally spaced markers per linkage group; 200 of the markers controlled the characteristics, defined as the first 20 markers of each LG. Genetic and phenotypic values were simulated assuming a binomial distribution of effects for each LG, and the absence of dominance. For phenotypic values, heritabilities of 20, 50, and 80% were considered. To compare methodologies, the analysis processing time, coefficient of coincidence (selection of 5, 10, and 20% of superior individuals), and Spearman correlation between the true genetic values and the genomic values predicted by each methodology were determined. Considering the processing time, the three methodologies were statistically different: rrBLUP was the fastest, and Bayesian LASSO was the slowest. Spearman correlation revealed that the rrBLUP and GBLUP methodologies were equivalent, and Bayesian LASSO provided the lowest correlation values. Similar results were obtained for the coincidence variables among the selected individuals, in which Bayesian LASSO differed statistically and presented a lower value than the other methodologies. Therefore, for the scenarios evaluated, rrBLUP is the best methodology for the selection of genetically superior individuals.
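The rrBLUP methodology that came out on top can be sketched as a single ridge regression in which every marker effect receives the same shrinkage. The toy data below loosely mirror the simulated setting, but the population size, marker count, effect distribution and the fixed penalty are assumptions of the sketch, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the simulated F2 data: 200 individuals, 500
# biallelic markers coded -1/0/1, the first 50 with true effects.
n, p = 200, 500
M = rng.integers(-1, 2, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:50] = rng.normal(0.0, 0.3, 50)
g = M @ beta                              # true genetic values
y = g + rng.normal(0.0, g.std(), n)       # phenotypes, heritability ~0.5

# rrBLUP: all marker effects shrunk by one common ridge penalty
# lambda = sigma_e^2 / sigma_beta^2 (fixed here for illustration).
lam = 100.0
beta_hat = np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ y)
gebv = M @ beta_hat                       # genomic breeding values

# Accuracy proxy: correlation of predicted values with true ones.
r = np.corrcoef(g, gebv)[0, 1]
print(round(float(r), 2))
```

GBLUP gives the same predictions through the genomic relationship matrix, which is why the paper finds the two equivalent in ranking while differing in runtime.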
Chung, Stephen W C; Lam, Aaron C H
This study determined the levels of pyrrolizidine alkaloids (PAs), including their respective N-oxides, in foodstuffs available in Hong Kong by liquid chromatography-electrospray ionisation tandem mass spectrometry. A total of 234 samples (48 food items) were collected randomly from a local market and analysed. About 50% of the samples were found to contain detectable amounts of PAs. Amongst the 48 food items, PAs were not detected in 11, including barley flour, beef, cattle liver, pork, pig liver, chicken meat, chicken liver, milk, non-fermented tea, Melissa tea and linden tea. For those found to contain detectable PAs, the summed PA content ranged up to 11,000 µg kg⁻¹. The highest sums of PA content among the 37 food items, calculated with the lower bound, were found in cumin seed, followed by oregano, tarragon and herbes de Provence, with ranges of 2.5-11,000, 1.5-5100, 8.0-3300 and 18-1300 µg kg⁻¹, respectively. Among the samples, the highest sum of PA content was detected in a cumin seed sample (11,000 µg kg⁻¹), followed by an oregano (5100 µg kg⁻¹), a tarragon (3300 µg kg⁻¹) and a herbes de Provence (1300 µg kg⁻¹) sample. In general, the results of this study agreed well with other published results in peer-reviewed journals, except that the total PAs in honey and specific tea infusions in this study were comparatively lower.
Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.
In order to improve the efficiency of large-scale orthophoto production for cities, this paper presents a method for the automatic selection of a seamlines network in large-scale orthophotos based on building vectors. Firstly, a simple model of each building is built by combining the building's vector, its height and the DEM, and the imaging area of the building on a single DOM is obtained. Then, the initial Voronoi network of the measurement area is automatically generated based on the positions of the bottoms of all images. Finally, the final seamlines network is obtained by automatically optimizing all nodes and seamlines in the network based on the imaging areas of the buildings. The experimental results show that the proposed method can not only make the seamlines network bypass buildings quickly, but also retain the minimum-projection-distortion property of the Voronoi network, which effectively solves the problem of automatic seamlines network selection in orthophoto mosaicking.
Supplier selection is very important for a company to be successful in a globalized competitive environment. Choosing the optimal supplier aims to increase customer satisfaction, improve competitive ability, and keep costs minimal. This study was intended to choose the most suitable raw material supplier for a company engaged in the production of agricultural tires. In the study, the Analytical Hierarchy Process (AHP) and PROMETHEE I-II methods were applied in order to select the optimal supplier for the company. In conclusion, the AHP and PROMETHEE I-II methods identified the optimal supplier, and several recommendations were made to the company.
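The AHP step mentioned above boils down, for a single criterion, to deriving priority weights from a pairwise comparison matrix and checking its consistency. A minimal sketch, with an invented 3-supplier matrix on Saaty's 1-9 scale:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three suppliers on one
# criterion (Saaty's 1-9 scale; the values are illustrative only).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.,  1.0, 3.0],
              [1/5., 1/3., 1.0]])

# AHP priorities: normalized principal right eigenvector of A.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio: CI = (lambda_max - n)/(n - 1); RI = 0.58 for n = 3.
n = A.shape[0]
ci = (vals[k].real - n) / (n - 1)
cr = ci / 0.58
print(w.round(3), round(float(cr), 3))   # CR < 0.1 means acceptably consistent
```

PROMETHEE I-II would then take such weights and pairwise preference functions to produce partial and complete rankings of the suppliers.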
Krasnov, V.A.; Malakhov, A.I.; Savina, M.V.; Shmatov, S.V.; Zarubin, P.I.
On the basis of the HIJING generator simulation of heavy ion collisions at the ultrarelativistic energy scale, a method for fast selection of inelastic nucleus-nucleus interactions is proposed for the CMS experiment at the LHC. The basic idea is to use the time coincidence of signals, with a resolution better than 1 ns, from the two very forward calorimeter arms covering the acceptance 3<|η|<5. The method's efficiency is investigated by varying the energy thresholds in the calorimeters for different colliding ion species, namely PbPb, NbNb, CaCa, OO, pPb, pCa and pp. It is shown that a stable event selection efficiency (∼98%) is provided over an energy threshold range up to 100 GeV for nuclear collisions at 5 TeV/nucleon in the centre-of-mass system. In the pp collision case the efficiency drops from 93% down to 80%.
A new method of selecting efficient project portfolios in the presence of hybrid uncertainty is presented. Pareto-optimal solutions are defined by an algorithm for generating project portfolios. The method allows efficient project portfolios to be selected while taking into account statistical and economic dependencies between projects, when some of the parameters used in the calculation of effectiveness can be expressed as an interactive possibility distribution and some as a probability distribution. The procedure for processing such hybrid data combines stochastic simulation with nonlinear programming. Interactions between data are modeled by correlation matrices and interval regression. Economic dependencies are taken into account by equations balancing the production capacity of the company. The practical example presented indicates that interaction between projects has a significant impact on the results of the calculations.
In recent years, in the face of major challenges such as increasingly inflexible consumer demands and the need to improve competitive advantage, it has become necessary for industrial organizations all over the world to focus on strategies that help them achieve cost reduction, continual quality improvement, increased customer satisfaction and on-time delivery performance. As a result, selection of the most suitable facility location for a new organization, or for expansion of an existing one, is one of the most important strategic issues in fulfilling these objectives. In order to survive in the global competitive market of the 21st century, many industrial organizations have begun to concentrate on the proper selection of the plant site or best facility location. The best location is the one that yields higher economic benefit through increased productivity and a good distribution network. When a choice is to be made from among several alternative facility locations, it is necessary to compare their performance characteristics in a decisive way. As the facility location selection problem involves multiple conflicting criteria and a finite set of candidate alternatives, different multi-criteria decision-making (MCDM) methods can be effectively applied to solve this type of problem. In this paper, four well-known MCDM methods are applied to a facility location selection problem and their relative ranking performances are compared. Because of disagreement among the ranks obtained by the four MCDM methods, a final ranking method based on REGIME is proposed by the authors to facilitate the decision-making process.
personal and telephone interviews. Ten individuals from each of the four AFWAL Laboratories were interviewed. The results illustrated that few of the...680). Aaker and Tyebjee, 1978. The authors constructed a model that dealt with the selection of interdependent R&D projects. The model covers three...of this research effort. Scope * The data collection method used in this study consisted of a combination of personal and telephone interviews. The
Sunita Bansal; Srijit Biswas; S.K. Singh
A big challenge in sustainable projects is the selection of an appropriate construction method, which is considered to be the decisive factor for their success. Many environmentally friendly prefabricated elements are entering the market at an increasing pace. This has increased the workload and inquisitiveness of the stakeholders, who will need information about their environmental, technical and esthetic aspects. The use of prefabrication in sustainable construction is advantageous but appropriate deci...
Zhao Heyong; Tan Kaixuan; Liu Huizhen
According to the complicated conditions and main influencing factors of in-situ leaching mining, an analytic hierarchy model and its processes are established for the evaluation and selection of in-situ leaching mining methods, based on the analytic hierarchy process (AHP). Taking a uranium mine in Xinjiang, China as an example, the application of this model is presented. The results of the analyses and calculations indicate that acid leaching is the optimum option. (authors)
Burgess, Catherine M; Smid, Eddy J; Rutten, Ger; van Sinderen, Douwe
Abstract Background This study describes a strategy to select and isolate spontaneous riboflavin-overproducing strains of Lactobacillus (Lb.) plantarum, Leuconostoc (Lc.) mesenteroides and Propionibacterium (P.) freudenreichii. Results The toxic riboflavin analogue roseoflavin was used to isolate natural riboflavin-overproducing variants of the food grade micro-organisms Lb. plantarum, Lc. mesenteroides and P. freudenreichii strains. The method was successfully employed for strains of all thr...
High dimensionality and small sample sizes, with their inherent risk of overfitting, pose great challenges for constructing efficient classifiers in microarray data classification. A feature selection technique should therefore be applied prior to data classification to enhance prediction performance. In general, filter methods can be considered as the principal or auxiliary selection mechanism because of their simplicity, scalability, and low computational complexity. However, a series of trivial examples shows that filter methods give less accurate performance because they ignore the dependencies between features. Although a few publications have attempted to reveal the relationships between features with multivariate methods, they describe these relationships only linearly, and a simple linear combination relationship restricts the improvement in performance. In this paper, we used a kernel method to discover inherent nonlinear correlations among features, as well as between features and the target. Moreover, the number of orthogonal components was determined by kernel Fisher's linear discriminant analysis (FLDA) in a self-adaptive manner rather than by manual parameter settings. In order to reveal the effectiveness of our method we performed several experiments and compared the results between our method and other competitive multivariate feature selectors. In our comparison, we used two classifiers (support vector machine, [Formula: see text]-nearest neighbor) on two groups of datasets, namely two-class and multi-class datasets. Experimental results demonstrate that the performance of our method is better than the others, especially on three hard-to-classify datasets, namely Wang's Breast Cancer, Gordon's Lung Adenocarcinoma and Pomeroy's Medulloblastoma.
Background and Aims: Selecting an appropriate treatment plan is one of the most critical aspects of dental treatment. The purpose of this study was to compare a multimedia system with the conventional method in patients' selection of prosthetic treatment, and the time consumed. Materials and Methods: Ninety patients were randomly divided into three groups. Patients in group A were instructed once using the conventional method of the dental office and once using the multimedia system, and time was measured in seconds from the beginning of the instruction until the patient had come to a decision. The patients were asked about their satisfaction with the method used. In group B, patients were only instructed using the conventional method, whereas group C was only exposed to the software. The data were analyzed with the paired t-test (in group A) and the t-test and Mann-Whitney test (in groups B and C). Results: There was a significant difference between the multimedia system and the conventional method in group A and also between groups B and C (P<0.001). In group A and between groups B and C, patients' satisfaction with the multimedia system was better. However, in the comparison between groups B and C, the multimedia system did not have a significant effect on the treatment selection score (P=0.08). Conclusion: Using a multimedia system is recommended due to its high ability to answer a large number of patients' questions, as well as in terms of marketing.
Efficient methods for including quantum effects in Monte Carlo calculations of large systems: extension of the displaced points path integral method and other effective potential methods to calculate properties and distributions.
Mielke, Steven L; Dinpajooh, Mohammadhasan; Siepmann, J Ilja; Truhlar, Donald G
We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.
Zhytar Maksym O.
The goal of the article is to identify a methodical approach to the selection of a bank's investment strategy on the basis of the factors of its formation, using the hierarchy analysis method. Factors in the formation of a bank's investment strategy were identified as a result of the study. The article demonstrates that selection of a bank's investment strategy can be efficiently realised on the basis of the hierarchy analysis method, which is the most popular method for multi-criteria assessment in the search for an optimal solution to the task at hand. The article offers a hierarchical decision-making structure, which could serve as a basis for selecting a bank's investment strategy with consideration of institutional flexibility. A prospect for further study in this direction is the development of an optimisation model of the bank's investment portfolio that considers not only institutional but also market flexibility of decision making.
Kim, Hee Eun; Kim, Bo Gyung; Kang, Hyun Gook
In the Korea Nuclear I and C Systems (KNICS) project, the software for the fully digitalized reactor protection system (RPS) was developed under a strict procedure. Even though the behavior of the software is deterministic, the randomness of the input sequence produces probabilistic behavior of the software. A software failure occurs when some inputs to the software occur and interact with the internal state of the digital system to trigger a fault that was introduced into the software during the software lifecycle. In this paper, a method to select a test set for software failure probability estimation is suggested. This test set reflects the past input sequence of the software, covering all possible cases. To obtain the profile of paired state variables, the relationships between the variables need to be considered, and the effect of input from the human operator also has to be considered. As an example, the test set of the PZR-PR-Lo-Trip logic was examined. This method provides a framework for selecting test cases of safety-critical software.
Wind turbine anomaly or failure detection using machine learning techniques through the supervisory control and data acquisition (SCADA) system is drawing wide attention from academia and industry. While parameter selection is important for modelling a wind turbine's condition, only a few papers have been published focusing on this issue, and in those papers the interconnections among sub-components in a wind turbine are used to address the problem. However, mere interconnections for decision making are sometimes too general to provide a parameter list that accounts for the differences between SCADA datasets. In this paper, a method is proposed to provide more detailed suggestions on parameter selection based on mutual information. First, the copula is proven to be capable of simplifying the estimation of mutual information. Then an empirical copula-based mutual information estimation method (ECMI) is introduced for application. After that, a real SCADA dataset is adopted to test the method, and the results show the effectiveness of the ECMI in providing parameter selection suggestions when physical knowledge is not accurate enough.
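A rough sketch of the empirical-copula route to mutual information that the abstract describes (not the authors' exact ECMI estimator): rank-transform each variable to pseudo-observations, estimate the copula density on a grid, and use the fact that mutual information equals the negative copula entropy. The bin count and sample sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def ecmi(x, y, bins=10):
    """Rough empirical-copula MI estimate: map each variable to its
    rank (pseudo-observations), estimate the copula density on a grid,
    and integrate c*log(c) over the unit square."""
    n = len(x)
    u = np.argsort(np.argsort(x)) / n        # empirical CDF values
    v = np.argsort(np.argsort(y)) / n
    h, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]])
    c = h / n * bins * bins                  # copula density estimate
    c = c[c > 0]
    return np.sum(c * np.log(c)) / bins**2   # each cell has area 1/bins^2

x = rng.standard_normal(5000)
dependent = ecmi(x, 2 * x + 0.1 * rng.standard_normal(5000))
independent = ecmi(x, rng.standard_normal(5000))
print(dependent > independent)
```

The rank transform makes the estimate invariant to each variable's marginal distribution, which is the simplification the copula brings; a strongly dependent pair scores far higher than an independent one.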
One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL were measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity.
delivery strategy and a testing strategy with the associated approach and methods for each strategy. For example, the testing strategy will describe the testing approach of the system development cycle and it will include methods of testing, testing...
In time, phenomenology has become a viable approach to conducting qualitative studies in education. Popular and well-established methods include descriptive and hermeneutic phenomenology. Based on critiques of the essentialism and receptivity of these two methods, however, this article offers a third variation of empirical phenomenology:…
Machine selection is an important and difficult process for firms, and its results may generate more problems than anticipated. In order to find the best alternative, managers should define the requirements of the factory and determine the necessary criteria. On the other hand, the decision-making criteria for choosing the right equipment may vary according to the type of manufacturing facility, market requirements, and consumer-assigned criteria. This study aims to find the best machine alternative among three machine offerings according to twelve evaluation criteria, by integrating the entropy method with the SAW method.
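The entropy-plus-SAW combination described above can be sketched in a few lines: entropy weighting scores each criterion by how much its values vary across the alternatives, and SAW ranks alternatives by a weighted sum of normalized scores. The decision matrix below is invented for illustration, and all criteria are treated as benefit criteria:

```python
import numpy as np

# Illustrative decision matrix: 3 machine alternatives x 4 benefit
# criteria (larger is better); the values are made up for the sketch.
X = np.array([[7.0, 300.0, 0.90, 5.0],
              [9.0, 250.0, 0.80, 7.0],
              [6.0, 400.0, 0.85, 6.0]])

# Entropy method: criteria whose values differ more across the
# alternatives carry more information and receive larger weights.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
w = (1 - E) / (1 - E).sum()

# SAW: normalize each criterion to [0, 1] and take the weighted sum.
S = (X / X.max(axis=0)) @ w
best = int(np.argmax(S))
print(w.round(3), best)
```

In the actual study the twelve criteria would include cost-type criteria as well, which are normalized with min/value instead of value/max before the weighted sum.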
Li, Meng; Jiang, Li-hui; Xiong, Xing-long
The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose an EMD selective thresholding method based on multiple iterations, which is essentially a development of EMD interval thresholding (EMD-IT); it randomly alters the samples of the noisy parts of all corrupted intrinsic mode functions to generate a better iteration effect. Simulations on both synthetic signals and real-world LIDAR signals support this method.
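A simplified sketch of the interval-thresholding building block underlying EMD-IT (and hence the proposed iterative variant): an IMF is split at its zero-crossings and every interval whose peak stays below a threshold is zeroed. The IMF here is a synthetic stand-in; the actual EMD decomposition (e.g. via a package such as PyEMD) is assumed to have been done already, and the threshold is arbitrary:

```python
import numpy as np

def interval_threshold(imf, T):
    """Simplified EMD interval thresholding: split an IMF at its
    zero-crossings and zero out every interval whose peak amplitude
    stays below the threshold T (kept intervals are left untouched)."""
    out = imf.copy()
    zc = np.where(np.diff(np.signbit(imf)))[0] + 1
    edges = np.concatenate(([0], zc, [len(imf)]))
    for a, b in zip(edges[:-1], edges[1:]):
        if np.max(np.abs(imf[a:b])) < T:
            out[a:b] = 0.0
    return out

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t)          # stand-in for a signal-bearing IMF
noisy = clean + 0.2 * rng.standard_normal(1000)
den = interval_threshold(noisy, T=0.5)
print(int((den == 0).sum()), "samples set to zero")
```

Working on whole zero-crossing intervals, rather than individual samples as in wavelet thresholding, avoids the discontinuities that sample-wise thresholding would introduce into the reconstructed IMF.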
Perret, Amandine; Reynard, Emmanuel
stages of the glacial retreat since the Last Glacial Maximum. From a spatial point of view, the objective was to show the different types of glacial remnants, but also some landforms related to deglaciation processes. Finally, 32 glacial and associated geosites were selected. Each geosite was submitted to a full evaluation process, including basic information, description, explanation of morphogenesis and an evaluation of the values assigned to geosites. This assessment, first qualitative, provided valuable information concerning their intrinsic interest and their management. A numerical evaluation was also performed to classify geosites and define an order of priority for their touristic promotion. It is worth noting that each selected point of interest can in fact be qualified as a geosite, using a clear method of selection. In this study, the numerical evaluation is not a means to select geosites but a way to rank one geosite against another. Some geosites can be abandoned if their intrinsic values are too low. Despite a well-defined protocol, subjectivity and the authors' choices are part of the selection process and inventory. This fact is certainly not a weakness, but it must be considered whenever such an inventory is made. References: Martin, S. (2012). Valoriser le géopatrimoine par la médiation indirecte et la visualisation des objets géomorphologiques (Thèse de doctorat). Université de Lausanne, Lausanne. Reynard, E., Fontana, G., Kozlik, L., Scapozza, C. (2007). A method for assessing the scientific and additional values of geomorphosites. Geographica Helvetica, 62(3), 148-158. Reynard, E., Perret, A., Grangier, L., & Kozlik, L. (2012). Methodological approach for the assessment, protection, promotion and management of geoheritage. EGU General Assembly, Vienna.
Beal, D.; McIlvaine , J.; Fonorow, K.; Martin, E.
This document illustrates guidelines for the efficient installation of interior duct systems in new housing, including the fur-up chase method, the fur-down chase method, and interior ducts positioned in sealed attics or sealed crawl spaces. Interior ducts result from bringing the duct work inside a home's thermal and air barrier. Architects, designers, builders, and new home buyers should thoroughly investigate any opportunity for energy savings that is as easy to implement during construction as interior duct work. In addition to enhanced energy efficiency, interior ductwork brings other important advantages, such as improved indoor air quality, increased system durability and increased homeowner comfort. While the advantages of well-designed and constructed interior duct systems are recognized, this approach has not yet gained significant market acceptance. This guideline describes a variety of methods for creating interior ducts, including the fur-up chase method, the fur-down chase method, and interior ducts positioned in sealed attics or sealed crawl spaces. As communication of the intent of an interior duct system and collaboration on its construction are paramount to success, this guideline details the critical design, planning, construction, inspection, and verification steps that must be taken. Involved in this process are individuals from the design team; the sales/marketing team; and mechanical, insulation, plumbing, electrical, framing, drywall and solar contractors.
Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian
Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method for hybrid FH signal sorting and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and basic PRI transformation theory to realize the sorting and parameter estimation of the components of a hybrid frequency hopping signal. The simulation results show that this method can correctly sort the frequency hopping component signals, and that the estimation error of the frequency hopping period is about 5% and the estimation error of the hop frequency is less than 1% when the SNR is 10 dB. However, the performance of this method deteriorates seriously at low SNR.
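The spectral-entropy ingredient mentioned above is easy to illustrate: within one dwell, a frequency-hopping tone concentrates its power in very few spectral bins, so its spectral entropy is far lower than that of noise. The sampling rate, dwell length and tone frequency below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
fs, hop_len = 8192, 1024            # sample rate and samples per dwell (assumed)

def spectral_entropy(frame):
    """Shannon entropy of the normalized power spectrum; a single
    frequency-hopping tone gives a much lower value than wideband noise."""
    p = np.abs(np.fft.rfft(frame)) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

t = np.arange(hop_len) / fs
tone = np.sin(2 * np.pi * 1200 * t)      # one FH dwell (1200 Hz is bin-aligned)
noise = rng.standard_normal(hop_len)

print(spectral_entropy(tone) < spectral_entropy(noise))
```

Thresholding this entropy frame by frame separates tone-bearing dwells from noise-only segments, after which dwell timing feeds the PRI-transformation stage for hop-period estimation.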
Loustau, Marie-Therese; Verhoog, Roelof; Precigout, Claude
A method of bonding a metal connection to an electrode including a core having a fiber or foam-type structure for an electrochemical cell, in which method at least one metal strip is pressed against one edge of the core and is welded thereto under compression, wherein, at least in line with the region in which said strip is welded to the core, which is referred to as the "main core", a retaining core of a type analogous to that of the main core is disposed prior to the welding.
Tucker, Joseph W; Chenard, Lois; Young, Joseph M
A sulfur-functionalized aminoacrolein derivative is used for the efficient and selective synthesis of heterocyclic sulfonyl chlorides, sulfonyl fluorides, and sulfonamides. The development of a 3-step parallel medicinal chemistry (PMC) protocol for the synthesis of pyrazole-4-sulfonamides effectively demonstrates the utility of this reagent. This reactivity was expanded to provide rapid access to other heterocyclic sulfonyl fluorides, including pyrimidines and pyridines, whose corresponding sulfonyl chlorides lack suitable chemical stability.
Kang, Suyeon; Song, Jongwoo
A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
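The core of the SAM approach the authors modify is a fudge factor added to the per-gene standard error, so that genes with tiny variance cannot produce spuriously large statistics. A minimal sketch on synthetic data (the statistic below is a simplified SAM-style moderated t, not the authors' exact weighting scheme):

```python
import numpy as np

rng = np.random.default_rng(5)

def sam_stat(x, y, s0=0.1):
    """SAM-style moderated t: the fudge factor s0 is added to the pooled
    standard error, stabilizing the statistic for low-variance genes."""
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    pooled_var = (x.var(axis=1, ddof=1) * (nx - 1) +
                  y.var(axis=1, ddof=1) * (ny - 1)) / (nx + ny - 2)
    se = np.sqrt(pooled_var * (1 / nx + 1 / ny))
    return diff / (se + s0)

# Toy data: 100 genes x 5 replicates per state; only gene 0 is truly
# differentially expressed between the two states.
a = rng.normal(0.0, 1.0, (100, 5))
b = rng.normal(0.0, 1.0, (100, 5))
a[0] += 3.0
d = sam_stat(a, b)
print("top gene statistic:", round(float(abs(d[0])), 2))
```

The weighting schemes proposed in the paper adjust this statistic further so that it stays reliable when the data are noisy or replicates are few.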
Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu
Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a contribution to the