# MICAI-2011 Accepted Papers

Special session: Polibits

Special session: CPS

Special session: RCS

The papers are listed in no particular order.

# Main session: Springer LNAI

Aram González and Jorge Ramírez Uresti. Strategy patterns prediction model (SPPM)
Abstract: Multi-agent systems are widely known for their ability to simulate real-life situations that require the interaction and cooperation of individuals. Opponent modeling can be used together with multi-agent systems to model complex situations such as competitions, for instance soccer games. In this paper, a model for predicting opponent moves is presented. The model is built around an offline step (learning phase) and an online one (execution phase). The offline step obtains and analyses previous experiences, while the online step uses the data generated by the offline analysis to predict opponent moves. The model is illustrated by an experiment with the RoboCup 2D Soccer Simulator.
Evaluating Probabilistic Models Learned from Data
Abstract: Several learning algorithms have been proposed to construct probabilistic models from data using the Bayesian network formalism. Some of them allow the participation of human experts in order to create a knowledge representation of the domain. However, multiple different models may result for the same problem using the same data set. This paper presents our experience in the construction of a probabilistic model that constitutes a viscosity virtual sensor. Several experiments have been conducted and several different models have been obtained. The analysis of the models and the conclusions drawn are included in this paper.
A GRASP with Strategic Oscillation for a Commercial Territory Design Problem with a Routing Budget Constraint
Abstract: This paper addresses a commercial districting problem arising in the bottled beverage distribution industry. The problem consists of grouping a set of city blocks into territories so as to maximize territory compactness. As planning requirements, the grouping seeks to balance both the number of customers and the product demand across territories, maintain connectivity of territories, and limit the total cost of routing. A combinatorial optimization model for this problem is introduced. Previous work on commercial territory design has focused mainly on design decisions. This work is, to the best of our knowledge, the first to address both design and routing decisions simultaneously by considering a budget constraint on the total routing cost in commercial territory design. A greedy randomized adaptive search procedure (GRASP) that incorporates advanced features such as adaptive memory and strategic oscillation is developed. Empirical evidence over a wide set of randomly generated instances based on real-world data shows a very positive impact of these advanced components. These strategies were observed to yield feasible solutions, which is hard to achieve when these components are not used. Solution quality is significantly improved as well.
Dante Mujica-Vargas and Francisco Javier Gallegos-Funes. Robust RML estimator - Fuzzy C-Means clustering algorithms for noisy image segmentation
Abstract: Image segmentation is a key step for many image analysis applications. So far, no general method exists that can suitably segment all images, regardless of whether they are corrupted by noise or noise-free. In this paper, we propose to modify the fuzzy c-means clustering algorithm and its FCM_S1 variant by using the RML estimator. The idea behind our method is to obtain robust clustering algorithms able to segment images with different types and levels of noise. The performance of the proposed algorithms is tested on synthetic and real images. Experimental results show that the proposed algorithms are more robust to the presence of noise and more effective than the comparative algorithms.
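The abstract describes a robust modification of fuzzy c-means; the RML-based variant itself is not detailed here. As a point of reference, below is a minimal sketch of the plain fuzzy c-means baseline that the paper modifies, applied to 1-D data (the cluster count, fuzzifier `m`, and sample data are illustrative assumptions):

```python
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on 1-D data (the baseline the paper modifies)."""
    rng = random.Random(seed)
    centers = rng.sample(points, c)
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in points:
            d = [abs(x - v) or 1e-12 for v in centers]  # guard zero distance
            u.append([1.0 / sum((d[i] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for i in range(c)])
        # Center update: v_i = sum_j u_ij^m x_j / sum_j u_ij^m
        centers = [sum(u[j][i] ** m * points[j] for j in range(len(points))) /
                   sum(u[j][i] ** m for j in range(len(points)))
                   for i in range(c)]
    return sorted(centers)

data = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8]
print(fuzzy_c_means(data))  # two centers, near the two groups around 1 and 5
```

The robust variants replace the squared-error terms in these updates with an estimator that down-weights noisy pixels.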
Two-Step Individual Travel Behavior Modeling through Fuzzy Cognitive Map Pre-definition and Learning
Abstract: Transport modeling (of both management and behavior) has emerged in modern societies because of its importance for all social and economic processes. Applying advanced computational techniques such as Artificial Intelligence in this field is highly relevant from scientific, economic, and social points of view. This paper deals with Fuzzy Cognitive Maps as an approach to representing the behavior and operation of such complex systems. Two steps are presented: first, an initial modeling through automatic knowledge engineering and formalization, and second, a readjustment of parameters with a learning method inspired by Particle Swarm Optimization. The theoretical results stem from the necessities of a real case study, which is also presented, showing the practical side of the proposal: new findings were obtained and existing problems were solved.
Partha Pakray, Utsab Barman, Sivaji Bandyopadhyay and Alexander Gelbukh. A Statistics-Based Semantic Textual Entailment System
Abstract: A Textual Entailment (TE) recognition system that uses semantic features is described in this paper. The semantic features are based on the Universal Networking Language (UNL). The proposed TE system compares the UNL relations in both the text and the hypothesis to arrive at the two-way entailment decision. The system has been separately trained on each development corpus released as part of the Recognizing Textual Entailment (RTE) competitions RTE-1, RTE-2, RTE-3 and RTE-5, and tested on the respective RTE test sets. No separate development data was released in RTE-4. The evaluation results on each test set are compared with those of the RTE systems that participated in the respective RTE competitions using a semantic approach.
Loreto Gonzalez-Hernandez, Jose Torres-Jimenez and Nelson Rangel-Valdez. An Exact Approach to Maximize the Number of Wild Cards in a Covering Array
Abstract: Covering arrays CA(N;t,k,v) are combinatorial structures that can be used to define adequate test suites for software testing. The smaller a CA is, the smaller the number of test cases that will be used to test the functionality of a software component in order to identify possible failures. Because the construction of CAs of optimal size is a highly combinatorial problem, several approximate strategies have been developed. Some constructions produced by these strategies can be further improved through a post-optimization process. For example, the wild-card profile of a CA is the set of symbols that can be modified without changing the properties that define a CA. It has been shown that some CAs can be reduced by merging rows that contain wild cards. This paper presents a Branch and Bound (B&B) strategy that maximizes the number of wild cards in the profile of an already constructed CA. We identify such profiles in 100 CAs of strength t=2 and alphabets v from 6 to 25. Also, it is shown that for a specific CA(42;2,8,6) different profiles can be obtained; such profiles vary in the number of wild cards and their distribution in the CA.
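The B&B procedure itself is not reproduced here, but the two underlying notions, strength-2 coverage and wild cards, are easy to state in code. A small pure-Python sketch (the example arrays used below are toy CAs, not the paper's instances); note that it tests each cell independently, whereas a wild-card profile requires the cells to remain jointly replaceable:

```python
from itertools import combinations

def is_covering_array(A, v, t=2):
    """Check that every pair of columns covers all v**t symbol pairs."""
    k = len(A[0])
    for i, j in combinations(range(k), 2):
        seen = {(row[i], row[j]) for row in A}
        if len(seen) < v ** t:
            return False
    return True

def wild_cards(A, v):
    """Cells whose symbol can be replaced by any other symbol
    while strength-2 coverage is preserved (each cell tested alone)."""
    cells = []
    for r in range(len(A)):
        for c in range(len(A[0])):
            orig = A[r][c]
            if all(is_covering_array(
                       [row if q != r else row[:c] + [s] + row[c + 1:]
                        for q, row in enumerate(A)], v)
                   for s in range(v) if s != orig):
                cells.append((r, c))
    return cells

# An optimal CA(4;2,3,2): every pair appears exactly once, so no wild cards.
A = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(is_covering_array(A, 2), wild_cards(A, 2))
```

Appending a duplicate row to `A` makes every cell of both copies redundantly covered, so wild cards appear, which is exactly the slack a post-optimization step exploits.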
Fernando David Ramirez-Figueroa and Alfredo Victor Mantilla-Caeiros. Hybrid Intelligent Speed Control of Induction Machines using Direct Torque Control
Abstract: This paper presents a novel hybrid adaptive fuzzy controller for the regulation of speed of induction machines with direct torque control. The controller is based on a fuzzy system and a PID control with decoupled gains. Genetic programming techniques are used for offline optimization of the normalization constants of the fuzzy membership function ranges. Fuzzy c-means clustering is introduced for online optimization of the limits of the triangular fuzzy membership functions. Finally, simulations in LabVIEW are presented that validate the response of the controller with and without load on the machine; results and conclusions are discussed.
July Andrea Galeano Zea, Romuald Jolivot and Franck Marzani. Analysis of Human Skin Hyper-Spectral Images by Non-negative Matrix Factorization
Abstract: This article presents the use of Non-negative Matrix Factorization, a blind source separation algorithm, for the decomposition of human skin absorption spectra into its main pigments: melanin and hemoglobin. The evaluated spectra come from a hyperspectral image, which is the result of the processing of a series of multispectral images by a neural-network-based algorithm. The implemented source separation algorithm is based on multiplicative coefficient updates. The goal is to represent a given spectrum as the weighted sum of two spectral components. The resulting weighting coefficients are used to quantify the melanin and hemoglobin content in the given spectra. The results show a degree of correlation higher than 90% compared to theoretical hemoglobin and melanin spectra. The methodology is validated on 35 melasma lesions from a population of 10 subjects.
Yaima Filiberto Cabrera, Rafael Bello Pérez, Yailé Caballero Mota and Gonzalo Ramos Jimenez. Improving the MLP learning by using a method to calculate the initial weights of the network based on the quality of similarity measure
Abstract: This work presents a technique that integrates the backpropagation learning method with a method to calculate the initial weights for training the Multilayer Perceptron model. The method to calculate the initial weights of the MLP is based on the quality of similarity measure proposed in the framework of rough set theory. Experimental results show that the proposed initialization method performs much better than the conventional random initialization method, so it is an interesting alternative to conventional random initialization.
Laia Subirats and Luigi Ceccaroni. An ontology for computer-based decision support in rehabilitation
Abstract: Although functionality and disease classifications are available thanks to initiatives such as the “international classification of functioning, disability and health”, the “systematized nomenclature of medicine - clinical terms” and the “international classification of diseases”, a formal model of rehabilitation interventions has not yet been defined. Such a model can play a fundamental role in the design of computer-based decision support in rehabilitation. Some initiatives, such as the “international classification of health interventions”, are in development, but their scope is too general to cope with the specificities that characterize rehabilitation. The aim of this work is to formalize knowledge in order to support diagnosis and the personalization of activities for people with functional diversity. To formally define diagnosis and activity personalization, a methodology has been developed to extract standardized concepts from clinical scales and the literature.
Fernando Rechy-Ramírez, Hector-Gabriel Acosta-Mesa, Efren Mezura-Montes and Nicandro Cruz-Ramírez. Time Series Discretization Using Evolutionary Programming
Abstract: In this work, we present a novel algorithm for time series discretization. Our approach includes the optimization of the word size and the alphabet as one parameter. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete dataset, and the compression rate assessed as the length of the discrete representation. Our proposal is compared with some of the most representative algorithms found in the specialized literature, tested on a well-known benchmark of time series data sets. The statistical analysis of the classification accuracy shows that the overall performance of our algorithm is highly competitive.
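The evolutionary search itself is not reproduced here, but the object being optimized, a discretization scheme mapping a series to a word of a given size over a given alphabet, can be sketched. A minimal SAX-style illustration using segment means plus equal-width bins (the paper instead evolves the scheme against its entropy/complexity/compression cost function):

```python
def discretize(series, word_size, alphabet_size):
    """Map a numeric series to a word: piecewise means, then equal-width bins."""
    n = len(series)
    # Piecewise aggregate approximation: mean of each of `word_size` segments.
    segments = [series[i * n // word_size:(i + 1) * n // word_size]
                for i in range(word_size)]
    means = [sum(s) / len(s) for s in segments]
    # Equal-width bins over the series range, one letter per bin.
    lo, hi = min(series), max(series)
    width = (hi - lo) / alphabet_size or 1.0  # guard constant series
    word = []
    for m in means:
        b = min(int((m - lo) / width), alphabet_size - 1)
        word.append(chr(ord('a') + b))
    return ''.join(word)

print(discretize([1, 1, 1, 1, 9, 9, 9, 9], word_size=2, alphabet_size=2))
```

An evolutionary programming loop would mutate the word size and breakpoints and keep the scheme whose resulting strings minimize the combined cost.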
Angel Kuri-Morales, Luis Enrique Cortes-Berrueco and Daniel Trejo. Clustering of Heterogeneously Typed Data with Soft Computing - A Case Study
Abstract: The problem of finding clusters in arbitrary sets of data has been attempted using different approaches. In most cases, the use of metrics to determine the adequacy of said clusters is assumed. That is, the criteria yielding a measure of cluster quality depend on the distance between the elements of each cluster. Typically, one considers a cluster to be adequately characterized if the elements within a cluster are close to one another while, simultaneously, they appear to be far from those of different clusters. This intuitive approach fails if the variables of the elements of a cluster are not amenable to distance measurements, i.e., if the vectors of such elements cannot be quantified. This case arises frequently in real-world applications where several variables (if not most of them) correspond to categories. The usual tendency is to assign arbitrary numbers to every category: to encode the categories. This, however, may result in spurious patterns: relationships between the variables which were not really there at the outset. It is evident that there is no truly valid assignment which may ensure a universally valid numerical value for this kind of variable. But there is a strategy which guarantees that the encoding will, in general, not bias the results. In this paper we explore such a strategy. We discuss the theoretical foundations of our approach and prove that it is the best strategy in terms of the statistical behavior of the sampled data. We also show that, when applied to a complex real-world problem, it allows us to generalize soft computing methods to find the number and characteristics of a set of clusters. We contrast the characteristics of the clusters obtained by the automated method with those given by experts.
Instance Selection based on the Silhouette Coefficient Measure for Text Classification
Abstract: Automated classification is essentially a supervised machine learning technique that can be achieved by training a classifier on instances described by a set of features. In automated text classification, a basic challenge is to determine the relevant set of instances among the pool of instances, not all of which may be useful for training the classifier. Another problem is to adopt a feature selection strategy that will efficiently and accurately represent the training instances. In this work we propose a simple method for instance selection such that the selected set of instances automatically contains useful features to better represent the training set. In addition, our method also achieves lower runtime along with better accuracy.
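The paper's selection rule is not spelled out in the abstract; as an illustration of the title's idea, the silhouette coefficient can score how well each training instance fits its class, and instances with low scores can be dropped before training. A pure-Python sketch (the Euclidean distance, the threshold, and the toy data are assumptions):

```python
def silhouette(points, labels):
    """Per-instance silhouette s(i) = (b - a) / max(a, b), where a is the mean
    intra-class distance and b the mean distance to the nearest other class."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    scores = []
    for i, p in enumerate(points):
        same = [dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same) if same else 0.0
        b = min(sum(dist(p, q) for j, q in enumerate(points)
                    if labels[j] == lab) /
                sum(1 for j in range(len(points)) if labels[j] == lab)
                for lab in set(labels) if lab != labels[i])
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return scores

def select_instances(points, labels, threshold=0.5):
    """Keep only instances whose silhouette exceeds the threshold."""
    s = silhouette(points, labels)
    return [i for i, v in enumerate(s) if v > threshold]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (5, 5)]
labs = [0, 0, 0, 1, 1, 1]
print(select_instances(pts, labs))  # the stray point (5, 5) is dropped
```

A mislabeled or borderline instance such as `(5, 5)` scores near or below zero and is filtered out, so the classifier is trained on the instances that best represent their class.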
Vladimir Lukin, Nikolay Ponomarenko, Andrey Kurekin and Oleksiy Pogrebnyak. Processing and Classification of Multichannel Remote Sensing Data
Abstract: Several practical tasks important for the effective pre-processing of multichannel remote sensing (RS) images are considered, in order to reliably retrieve useful information from the images and to make the data available to potential users. First, possible strategies of data processing are discussed. It is shown that one problem is to use more adequate models to describe the noise present in real images. Another problem is the automation of all or, at least, several stages of data processing, such as determination of the noise type and its statistical characteristics, noise filtering, and image compression before applying classification at the final stage. Second, some approaches that are effective and able to perform well enough within automatic or semi-automatic frameworks for multichannel images are described and analyzed. The applicability of the proposed methods is demonstrated on particular examples of real RS data classification.
Edwin R. Garcia, Larysa Burtseva, Margarita Stoytcheva and Félix F. González-Navarro. Predicting the behavior of the interaction of acetylcholine, the pH and temperature on an acetylcholinesterase biosensor
Abstract: The steady-state current response of a second-generation acetylcholinesterase electrochemical sensor, resulting from the interaction of pH, temperature, and substrate concentration, was evaluated in order to improve the biosensor's analytical characteristics using computational learning models. Neural networks and support vector machines demonstrated excellent results, despite the limited number of samples. The predictions provided by both models were compared in order to determine which better generalizes the response generated by the acetylcholinesterase sensor signal.
Miguel Murguía-Romero, Rafael Villalobos-Molina, Adolfo René Méndez-Cruz and Rafael Jiménez-Flores. Heuristic search of cut-off points for clinical parameters: Defining the limits of obesity
Abstract: We studied the variability of obesity, measured through waist circumference, in a sample of 3,176 young Mexicans (17-24 years old). According to the American Heart Association, obesity is one of the five clinical alterations that define the metabolic syndrome (MS); the other four are low levels of HDL cholesterol and high values of triglycerides, glucose, and blood pressure. Cut-off points of 80 cm for women and 90 cm for men have been proposed to define a normal or altered value of waist circumference for Mexicans. We assume that the waist circumference in a healthy population has a normal distribution, so a single cut-off point is only an upper limit for normal values. The objective of this work is to estimate the underlying normal distribution of the waist circumference of healthy people, involving the other four components of the MS in the analysis and approaching the problem as a combinatorial one. We define a combination of cut-off points for the other four components of the MS; considering a set of 50 cut-off point candidates for each of the five parameters, a search space of 50^5 (more than 300 million combinations) is defined. Each particular combination of cut-off points (excluding waist circumference) defines a subpopulation whose parameter values fall into the normal ranges so defined; for each subpopulation we calculated the histogram of the waist circumference values. Using a heuristic function based on the symmetry of the histogram (skewness), we applied a best-first search over the combinations of cut-off points. We found a combination of cut-off point values that generates the most symmetrical histogram, which we propose as a useful criterion to set cut-off points of MS parameters for Mexicans. Finally, the histogram defined by the solution is proposed as the normal distribution of the healthy population, representing the variability of the waist circumference of non-obese young Mexicans.
Eugene Levner, Amir Elalouf and Edwin Cheng. Computing Mobile Agent Routes with Node-Wise Constraints in Distributed Communication Systems
Abstract: A basic problem in the quality-of-service (QoS) analysis of multi-agent distributed systems is to find optimal routes for mobile agents that incrementally fuse the data as they visit hosts in the distributed system. The system is modeled by a directed acyclic graph in which nodes represent hosts and edges represent links between them. Each edge is assigned a cost (or benefit) and weights that represent link delay, reliability, or other QoS parameters. The agent scheduling problems are viewed as constrained routing problems in which a maximum-benefit (or minimum-cost) route connecting source to destination and subject to QoS constraints is to be found. We study approximation algorithms called ‘fully polynomial time approximation schemes’ (FPTAS) for solving these problems. We suggest an accelerating technique that permits improving known FPTASs, e.g., the algorithms of Hassin (1992), Camponogara & Shima (2010), and Elalouf et al. (2011), and obtaining new FPTASs.
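The FPTAS accelerations are beyond a short sketch, but the constrained routing problem they approximate can be solved exactly in pseudo-polynomial time by dynamic programming over (node, delay) labels, which is the baseline an FPTAS rounds and scales. A minimal illustration (the graph encoding, costs, and budget below are assumptions, not the paper's notation):

```python
import heapq

def min_cost_route(edges, source, target, max_delay):
    """Exact label-setting DP: minimum-cost source-to-target route whose
    total delay stays within max_delay. edges: node -> [(next, cost, delay)]."""
    best = {(source, 0): 0}              # (node, delay used) -> best cost
    heap = [(0, source, 0)]              # (cost, node, delay used)
    while heap:
        cost, u, d = heapq.heappop(heap)
        if best.get((u, d), float('inf')) < cost:
            continue                     # stale label
        for v, c, w in edges.get(u, []):
            nd = d + w
            if nd > max_delay:
                continue                 # violates the QoS budget
            if cost + c < best.get((v, nd), float('inf')):
                best[(v, nd)] = cost + c
                heapq.heappush(heap, (cost + c, v, nd))
    feasible = [c for (n, d), c in best.items() if n == target]
    return min(feasible) if feasible else None

edges = {'s': [('a', 1, 5), ('b', 10, 1)],
         'a': [('t', 1, 5)],
         'b': [('t', 1, 1)]}
print(min_cost_route(edges, 's', 't', max_delay=10))
print(min_cost_route(edges, 's', 't', max_delay=5))
```

Tightening the delay budget from 10 to 5 forces the cheap route through `a` out of the feasible set, so the solver falls back to the expensive but fast route through `b`.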
Diagnosis in Sonogram of Gall Bladder
Abstract: This paper describes the development and testing of a diagnostic system using sonograms. In far-flung areas of developing countries, where the availability of specialists is a problem and sometimes not even a possibility, it is highly beneficial to have an on-site computerized diagnosis application to support medical staff. Diagnosing sonograms to identify the affected part in offline scenarios is not always easy. Besides, the lack of infrastructure does not permit online solutions to be a practical option. We implement a system named Intelligent-Eye (I-Eye) which employs imaging and diagnostic techniques to support gallbladder diagnosis, saving the time and cost of medical procedures. We implemented an algorithm that can be used on different diagnostic ultrasonography machines, generating accurate information reports and diagnoses.
Sentiment Analysis of Urdu Language: Handling Phrase-Level Negations
Abstract: This paper investigates and proposes the treatment of the effect of phrase-level negations on the sentiment analysis of Urdu text-based reviews. These negations act as valence shifters and flip or switch the inherent sentiments of the subjective terms in opinionated sentences. The presented approach focuses on subjective phrases called SentiUnits, which are made up of subjective terms (adjectives), their modifiers, conjunctions, and negations. The final effect of these phrases is computed according to the given model. The analyzer takes one sentence from the given review, extracts the constituent SentiUnits, computes their overall effect (polarity), and then computes the final sentence polarity. Using this approach, the effect of negations is handled within the subjective phrases. Despite the fact that we have applied our approach to a little-explored and resource-poor language, Urdu, the results of the experiments are quite encouraging.
Hamid Parvin. An Ensemble Based Approach for Feature Selection
Abstract: In this paper we propose an ensemble-based approach for feature selection. We aim at overcoming the parameter sensitivity of feature selection approaches. To do this we employ an ensemble method. Our algorithm automatically obtains results for the different possible threshold values. For each threshold value, we get a subset of features. We give a score to each feature in these subsets. Finally, using the ensemble method, we select the features with the highest scores. This method is not parameter-sensitive, and it has also been shown that the method based on fuzzy entropy yields more reliably selected features than previous methods. Empirical results show that while the efficacy of the method is not considerably decreased in most cases (indeed, performance even increases in many cases), the method becomes free from the setting of any parameter.
Hamid Parvin. A New Space Defined by Ant Colony Algorithm to Partition Data
Abstract: To reach a robust partition, ensemble-based learning is always a very promising option. A straightforward way is to generate a set of primary partitions that are different from each other, and then to aggregate the partitions via a consensus function to generate the final partition. Another alternative in ensemble learning is to turn to the fusion of different data from originally different sources. In this paper we introduce a new ensemble learning method based on the ant colony clustering algorithm. Experimental results on some real-world datasets are presented to demonstrate the effectiveness of the proposed method in generating the final partition.
Francisco Madrigal, Jean-Bernard Hayet and Mariano Rivera. Multiple target tracking with motion priors
Abstract: This paper presents a hybrid, particle-filter-based approach for multiple target tracking in video streams from single static cameras. We aim in particular to manage situations where mid-dense crowds are observed, and where, although tracking is possible, it is made complicated by frequent occlusions among targets and with scene clutter. Moreover, the appearance of targets is sometimes very similar, which often makes standard trackers switch target identities. Our contribution is two-fold: (1) we first propose an estimation scheme for motion priors in the camera field of view, which integrates sparse optical flow data and regularizes the corresponding discrete distribution fields on velocity directions and amplitudes; (2) we use these motion priors in a hybrid proposal distribution for a particle filter scheme in charge of the target estimation. Through several results on video-surveillance datasets, we show the pertinence of this approach.
Heidy M. Marin-Castro, Victor J. Sosa-Sosa and Ivan Lopez-Arevalo. Automatic Identification of Web Query Interfaces
Abstract: The amount of information contained in databases on the Web has grown explosively in recent years. This information, known as the Deep Web, is dynamically obtained from specific queries to these databases through Web query interfaces (WQIs). The problem of finding and accessing databases on the Web is a great challenge because Web sites are very dynamic and the existing information is heterogeneous. Therefore, it is necessary to create efficient mechanisms to access, extract, and integrate the information contained in Web databases. Since WQIs are the only means to access these databases, the automatic identification of WQIs plays an important role, helping traditional search engines to increase their coverage and to access interesting information not available on the indexable Web. In this paper we present a strategy for the automatic identification of WQIs using supervised learning and an adequate selection and extraction of HTML elements in the WQIs to form the training set. We present two experimental tests over a corpus of HTML forms containing positive and negative examples. Our proposed strategy achieves better accuracy than previous works reported in the literature.
Sara Elena Garza Villarreal and Ramon Brena. Topic mining based on graph local clustering
Abstract: This paper introduces an approach for discovering thematically related document groups (a topic mining task) in massive document collections with the aid of graph local clustering. This can be achieved by representing a collection as a directed graph where vertices are given by documents and arcs are given by connections among these (e.g., hyperlinks in the case of a Web collection). Because a document is likely to have more connections to other documents of the same theme, we have assumed---given this graph representation---that topics have the structure of a graph cluster, i.e. a group of vertices with more arcs to the inside of the group and fewer arcs to the outside of it. So, topics could be discovered by clustering the document graph; to cope with scalability and overlapping clusters, we use a local approach in the graph, which maximizes cohesion by exploring the neighborhoods of different sub-graph starting vertices. Apart from discovering these topical document clusters, we also extract properties (keywords and most representative documents) from them in order to provide a summary of the topic. Our approach was tested over the Wikipedia collection and we observed---by different means---that the resulting clusters in fact correspond to cohesive topical document groups, which leads us to conclude that topic mining can be treated as a graph clustering problem.
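As an illustration of the local approach described above, a cluster can be grown greedily from a seed vertex, adding the neighbor that most improves a cohesion score and stopping when no addition helps. The actual cohesion measure and stopping rule used by the authors are not given in the abstract, so the relative-density score below is an assumption:

```python
def local_cluster(graph, seed, max_size=20):
    """Greedy local expansion: repeatedly add the frontier vertex that most
    improves relative density (internal arcs / all arcs touching the group).
    graph: node -> list of successor nodes (directed adjacency)."""
    cluster = {seed}

    def score(group):
        internal = sum(1 for u in group for v in graph.get(u, []) if v in group)
        boundary = sum(1 for u in group for v in graph.get(u, []) if v not in group)
        total = internal + boundary
        return internal / total if total else 0.0

    improved = True
    while improved and len(cluster) < max_size:
        improved = False
        frontier = {v for u in cluster for v in graph.get(u, [])} - cluster
        best = max(frontier, key=lambda v: score(cluster | {v}), default=None)
        if best is not None and score(cluster | {best}) > score(cluster):
            cluster.add(best)
            improved = True
    return cluster

# Two triangles joined by a single (bidirectional) arc 3 <-> 4.
g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
print(local_cluster(g, seed=1))  # stays inside the seed's triangle
```

Because the score only inspects the cluster and its immediate frontier, the expansion never needs the whole graph in memory, which is what makes the local approach scale to collections like Wikipedia.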
Contextual Semantic Processing for a Spoken Dialogue System with Markov Logic
Abstract: Semantic processing is vital for the language understanding stage of a dialogue system. Recent approaches to semantic processing rely on machine learning methods to perform the task. These are more robust to speech recogniser errors; however, although these approaches are built on the domain of the dialogue system, they do not incorporate the contextual information available in the dialogue system. In this paper, we explore the use of contextual information, in the form of the expectations of a dialogue system, to perform semantic processing in a Spoken Dialogue System. We show the benefits of doing so, and propose a Markov Logic model which incorporates such information.
Alberto Pastrana Palma, Juan Francisco Reyes Muñoz, Denise Gómez Hernández, Luis Rodrigo Valencia Pérez, Juan Manuel Peña Aguilar and Alberto Lamadrid Álvarez. Computer Assisted Diagnosis of Microcalcifications in Mammographies
Abstract: Computer Assisted Diagnosis (CAD) is rapidly reaching worldwide acceptance in different fields of medicine. In particular, CAD has found one of its main applications in breast cancer diagnosis, where the detection of microcalcifications in women's breasts is typically associated with the presence of cancer. In this paper, a method for automatic breast contour detection is presented as a pre-processing step for microcalcification detection. Then, a combination of scale-space algorithms is used to locate candidate regions of microcalcifications, and a significant percentage of false positives is finally discriminated via thresholding. Regions detected using this method have been found to describe 91.6% of the microcalcifications in the MIAS database.
José Carlos Ortiz-Bayliss, Hugo Terashima-Marín, Ender Özcan, Andrew J. Parkes and Santiago Enrique Conant-Pablos. A Local Improvement Approach to Produce Variable and Value Ordering Decision Matrix Hyper-heuristics
Abstract: Constraint Satisfaction Problems (CSPs) represent an important topic of study because of their many applications in different areas of artificial intelligence and operational research. When solving a CSP, the order in which the variables are selected to be instantiated and the order of the values to be tried affect the complexity of the search. Hyper-heuristics are flexible methods that provide generality when solving different problems and, within CSPs, they can be used to determine the next variable and value to try. They select from a set of low-level heuristics and decide which one to apply at each decision point according to the problem state. This study explores a hyper-heuristic model for variable and value ordering within CSPs, based on a decision matrix hyper-heuristic that is constructed by a local improvement method that changes small portions of the matrix. The results suggest that the approach is able to combine the strengths of different low-level heuristics to perform well on a wide range of instances and compensate for their weaknesses on specific instances.
Andrej Chovanec and Roman Barták. On Generating Templates for Hypothesis in Inductive Logic Programming
Abstract: Inductive logic programming is a subfield of machine learning that uses first-order logic as a uniform representation for examples and hypotheses. In its core form, it deals with the problem of finding a hypothesis that covers all positive examples and excludes all negative examples. The coverage test and the method to obtain a hypothesis from a given template have been efficiently implemented using constraint satisfaction techniques. In this paper we suggest a method to efficiently generate the template by remembering a history of generated templates and using it when adding predicates to a new candidate template. This method significantly outperforms the existing method based on brute-force incremental extension of the template.
Jose Eduardo Ochoa-Luna, Kate Revoredo and Fabio Gagliardi Cozman. Learning Probabilistic Description Logics: A Framework and Algorithms
Abstract: Description logics have become a prominent paradigm in knowledge representation (particularly for the Semantic Web), but they typically do not include explicit representation of uncertainty. In this paper, we propose a framework for automatically learning a Probabilistic Description Logic knowledge base from data. We argue that one must learn both concept definitions and probabilistic assignments. We also propose algorithms that do so and evaluate these algorithms on real data.
Diego Cueva, Rafael Gonçalves, Fábio Cozman and Marcos Barretto. Crawling to Improve Multimodal Emotion Detection
Abstract: This paper demonstrates multimodal fusion of emotion sensory data in realistic scenarios of relatively long human-machine interactions. Fusion, combining voice and facial expressions, has been enhanced with semantic information retrieved from Internet social networks, resulting in more accurate determination of the conveyed emotion.
Jair Cervantes, Asdrúbal Lopez and Farid Garcia. A Fast SVM Training Algorithm Based on a Decision Tree Data Filter
Abstract: In this paper, we present a new algorithm to speed up the training time of Support Vector Machines (SVM). SVM has important properties such as a solid mathematical background and a better generalization capacity than other classification techniques. However, the major drawback of SVM lies in its training phase, which is computationally expensive and highly dependent on the size of the input data set. The proposed algorithm uses a data filter to reduce the input data set used to train the SVM. The data filter is based on an induction tree, which effectively reduces the training data set for the SVM, producing a very fast and highly accurate algorithm. According to the results, the algorithm runs faster than existing SVM implementations (SMO, LIBSVM and Simple-SVM) while achieving similar accuracy.
Hiroyuki Kido and Katsumi Nitta. Toward Justifying Actions with Logically and Socially Acceptable Reasons
Abstract: This paper formalizes argument-based reasoning for actions supported by believable reasons in terms of nonmonotonic consequences and desirable reasons in terms of Pareto optimality and maximizing social welfare functions. Our unified approach gives a four-layer practical argumentation framework structured with a propositional modal language with defaults and defeasible inference rules associated with practical reasoning. We show that the unified argument-based reasoning justifies an argument whose conclusion is supported by Pareto-optimal, social-welfare-maximizing and nonmonotonic consequence reasons. Our formalization contributes to extending argument-based reasoning so that it can formally combine reasoning about logical believability and social desirability, benefiting from economic notions.
Laura Zavala, Michael Huhns and Angelica Garcia. A Model for the Study of Dependences among Collaborative Agents
Abstract: As computing becomes pervasive, there are increasing opportunities for the use of collaborative multiagent systems that draw on multiple sources of knowledge to improve validation, accuracy, and reliability on different tasks. The integration process is critical to achieving a synergistic use of the different sources of knowledge, and a key element of this process is accounting for the degree of confidence of each integrated contribution. Confidence determination is often underestimated through the use of ad hoc techniques for integrating contributions based on agent reputation models and the assumption of independence among the agents (e.g., majority voting and weighted majority voting). In this paper, we study the influence that dependence-based confidence determination might have on the results provided by a group of collaborative agents. We present a domain-independent model for representing dependences among agents. We show that it is theoretically possible to obtain higher accuracy than that obtained under the assumption of independence among the agents. Our model also permits an empirical evaluation of the effectiveness of a collaborative multiagent system in the presence of dependences among the agents, and an analysis of the effects of incorrect confidence integration assumptions. Finally, we discuss the relation of our work to trust networks: how trust knowledge can be used in our model, and how our model can be used in trust networks.
Benito Camiña, Raul Monroy, Luis A. Trejo and Erika Sanchez. Towards Building a Masquerade Detection Method Based on User File System Navigation
Abstract: Given that information is an extremely valuable asset, it is vital to timely detect whether one's computer (session) is being illegally seized by a masquerader. Masquerade detection has been actively studied for more than a decade, especially after the seminal work of Schonlau's group, who suggested that, to profile a user, one should model the history of the commands she would enter in a UNIX session. Schonlau's group produced a masquerade dataset, which has become the standard for comparing masquerade detection methods. However, the performance of these methods is not conclusive, and, as a result, research on masquerade detection has resorted to other sources of information for profiling user behaviour. In this paper, we show how to build an accurate user profile by looking into how the user structures her own file system and how she navigates that structure. While preliminary, our results are encouraging and suggest a number of ways in which new methods can be constructed.
Luis G. Martínez, Juan R. Castro, Guillermo Licea and Antonio R. Diaz. Assessment of Uncertainty in the Projective Tree Test using an ANFIS learning approach
Abstract: In psychology, projective tests are interpretative and subjective, with results that depend on the eye of the beholder; they are widely used because they yield rich and unique data and are very useful. Because the measurement of drawing attributes carries a degree of uncertainty, it is possible to explore a fuzzy model approach to better assess interpretative results. This paper presents a study of the projective tree test applied in software development teams as part of RAMSET's (Role Assignment Methodology for Software Engineering Teams) methodology to assign specific roles within the team. Using a Takagi-Sugeno-Kang (TSK) Fuzzy Inference System (FIS), and also training data with an ANFIS model on our case studies, we have obtained an application that can help in the role assignment decision process by recommending the roles best suited for performance in software engineering teams.
Alejandro Rosales-Pérez, Carlos A. Reyes-García, Pilar Gómez-Gil, Jesus A. Gonzalez and Leopoldo Altamirano. Genetic Selection of Fuzzy Model for Acute Leukemia Classification
Abstract: Leukemia is a disease characterized by an abnormal increase of white blood cells. The disease is divided into two types, lymphoblastic and myeloid, each of which is divided into subtypes. Differentiating the type and subtype of acute leukemia is important in order to determine the correct treatment for the affected person. Diagnostic tests available today, such as those based on cell morphology, have a high error rate. Others, such as those based on cytometry or microarrays, are expensive. In order to avoid these drawbacks, this paper proposes the automatic selection of a fuzzy model for accurate classification of types and subtypes of acute leukemia based on cell morphology. Our experimental results reach up to 93.52% accuracy in the classification of acute leukemia types, 87.36% in lymphoblastic subtypes and 94.42% in myeloid subtypes. Our results show a significant improvement over classifiers whose parameters were manually tuned using the same data set. Details of the proposed method, as well as experiments and results, are presented.
Yadira Quiñonez, Darío Maravall and Javier De Lope. Stochastic Learning Automata for Self-Coordination in Heterogeneous Multi-Tasks Selection in Multi-Robot Systems
Abstract: This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-election of heterogeneous specialized tasks by autonomous robots, as opposed to the usual multi-task allocation problem in multi-robot systems, in which an external controller distributes the existing tasks among the individual robots. We consider a distributed, decentralized approach in which the robots themselves, autonomously and individually, are responsible for selecting a particular task, so that all existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario and propose a solution based on a learning automata-based probabilistic algorithm to solve the corresponding multi-task distribution problem. The paper ends with a critical discussion of the experimental results.
Felix Calderon and Carlos Junez. Regularization with Adaptive Neighborhood Condition for Image Denoising
Abstract: Image denoising by minimizing a neighborhood-similarity-based cost function is presented. This cost function consists of two parts: one related to data fidelity, the other a structure-preserving smoothing term. The latter is controlled by a weight coefficient that measures the neighborhood similarity between two pixels and is penalized by an additional term. Unlike most work in the noise removal area, the weight of each pixel within the neighborhood is not defined by a Gaussian function. The results obtained show the good performance of our proposal compared with some state-of-the-art algorithms.
Felix Emilio Luis-Pérez, Raúl Cruz-Barbosa and Gabriela Álvarez-Olguin. Regional Flood Frequency Estimation for the Mexican Mixteca Region  by Clustering Techniques
Abstract: Regionalization methods can help to transfer information from gauged catchments to ungauged river basins. Finding homogeneous regions is crucial for regional flood frequency estimation at ungauged sites, as is the case for the Mexican Mixteca region, where only one gauging station is working at present. One way to delineate these homogeneous watersheds into natural groups is by clustering techniques. In this paper, two different clustering approaches are used and compared for the delineation of homogeneous regions. The first is hierarchical clustering, which is widely used in regionalization studies. The second is the Fuzzy C-Means technique, which allows a station to belong, to different degrees, to several regions. The optimal number of regions is determined by fuzzy cluster validation measures. The experimental results of both approaches are similar, which confirms the delineated homogeneous region for this case study. Finally, a stepwise regression model using the forward selection approach is applied for flood frequency estimation in each homogeneous region found.
Carlos Alvez and Aldo Vecchietti. Efficiency Analysis in Content Based Image Retrieval Using RDF Annotations
Abstract: Nowadays it is common to combine low-level and semantic data for image retrieval. The images are stored in databases, and computer graphics algorithms are employed to retrieve the pictures. Most works consider both aspects separately. In this work, using the capabilities of a commercial ORDBMS, a reference architecture for image retrieval was implemented, and a performance analysis was then carried out using several index types to search specific semantic data stored in the database via RDF triples. The experiments analyzed the mean retrieval time of triples in tables containing hundreds of thousands to millions of triples. The performance obtained using Bitmap, B-Tree and Hash Partitioned indexes is analyzed. The results of these experiments are incorporated into the reference architecture in order to speed up the pattern search.
Oscar Alejandro Carrizales-Turrubiates, Nelson Rangel-Valdez and Jose Torres-Jimenez. Optimal Shortening of Covering Arrays
Abstract: A Covering Array (CA), denoted by CA(N; t, k, v), is a matrix of size N × k with entries from the set {0, 1, 2, ..., v-1}, such that every submatrix of size N × t contains each combination of symbols (t-tuple) from the v^t possibilities at least once. Here, t is called the strength, k the number of factors and v the alphabet size. We call t-wise the set of all t-tuples that must be covered in each subset of t columns. CAs are combinatorial structures that extend the notion of orthogonal arrays and have impressive applications in the area of software testing. This document defines the Problem of the Optimal Reduction of Covering Arrays (PORCA), together with exact and greedy approaches to solve it. PORCA searches for a submatrix of a given size in a CA, or in a matrix that is nearly a CA; the submatrix must minimize the number of missing t-tuples. PORCA finds applications as an initialization function for a metaheuristic or in the construction of new CAs; this approach is in general more efficient than a random initialization function.
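The quantity PORCA minimizes, the number of missing t-tuples, can be sketched directly from the definition above (a minimal illustration of the counting, not the authors' implementation; all names are ours):

```python
from itertools import combinations

def missing_t_wise(matrix, t, v):
    """Count t-tuples that a candidate array fails to cover.

    For every choice of t columns, a covering array of strength t must
    contain each tuple in {0..v-1}^t in at least one row; every absent
    tuple counts as one missing t-tuple.
    """
    k = len(matrix[0])
    missing = 0
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in matrix}
        missing += v ** t - len(seen)
    return missing

# A CA(4; 2, 3, 2): every pair of columns covers all four binary pairs.
ca = [[0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(missing_t_wise(ca, 2, 2))      # 0: a valid strength-2 CA
print(missing_t_wise(ca[:3], 2, 2))  # 3: dropping a row leaves one pair uncovered per column pair
```

A submatrix with zero missing t-tuples is itself a covering array; otherwise the count measures how far it is from one.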
Arturo Rodriguez-Cristerna, Jose Torres-Jimenez, Ivan Rivera-Islas, Cindy Hernandez-Morales, Hillel Romero-Monsivais and Adan Jose-Garcia. A Mutation-Selection Algorithm for the Problem of Minimum Brauer Chains
Abstract: This paper aims to tackle the problem of finding Brauer chains of minimum length by using a Mutation-Selection (MS) algorithm and a representation based on the Factorial Number System (FNS). We explain our MS strategy and report experimental results for a benchmark considered difficult, showing that this approach is a viable alternative for solving this problem. We also review the fine-tuning process for the MS algorithm, which was done with the help of Covering Arrays (CA) and the solutions of a Diophantine Equation (DE).
Leticia Cervantes and Oscar Castillo. Intelligent control of nonlinear dynamic plants using a new granular approach and type-2 fuzzy logic
Abstract: In this paper we present results of a new approach for the intelligent control of non-linear dynamical plants. First we present the proposed approach for intelligent control using a hierarchical modular architecture with type-2 fuzzy logic for combining the outputs of the modules. Then, the approach is illustrated with several cases: aircraft control, shower control and the three-tank problem. Finally, genetic algorithms are also used to automate the control design process.
Pedro Cadena and Leonardo Garr. Fuzzy Case-Based Reasoning for Managing Strategic and Tactical Reasoning in StarCraft
Abstract: We present the combination of fuzzy sets and Case-Based Reasoning (FCBR) to deal with strategic and tactical management in the real-time strategy environment of StarCraft. Case-based reasoning is a problem-solving AI approach that uses past experience to deal with current problems. Fuzzy set theory is used in case representation to provide a characterization of imprecise and uncertain information. The results revealed that our system can successfully reason about strategies and tactics, defeating the built-in AI of StarCraft. The principal conclusion was that FCBR can reason with abstract information and a large space of actions. Moreover, the resulting system shows its potential to incorporate human knowledge and can effectively adapt to varying conditions of the map.
Enrique Naredo and Oscar Castillo. ACO-tuning of a fuzzy controller for the Ball and beam problem
Abstract: We describe the use of Ant Colony Optimization (ACO) for the ball and beam control problem, in particular for tuning a Sugeno-type fuzzy controller. In our case study the controller has four inputs, each with two membership functions; we take the interpolation point of every pair of membership functions as the main parameter, and their individual shapes as secondary ones, in order to tune the fuzzy controller using different versions of ACO algorithms. Simulation results show that using ACO and coding the problem with just three parameters instead of six allows us to find an optimal set of membership function parameters for the fuzzy control system with less computational effort.
Fevrier Valdez, Patricia Melin and Oscar Castillo. Bio-Inspired Optimization Methods on Graphic Processing Units for Minimization  of Complex Mathematical Functions
Abstract: Although GPUs have been traditionally used only for computer graphics, a recent approach called GPGPU (general-purpose computing on graphics processing units) allows GPUs to perform numerical computations usually handled by the CPU [1]. The advantage of using GPUs for general-purpose computation is the performance speed-up that can be achieved thanks to the parallel architecture of these devices. This paper describes the use of bio-inspired optimization methods, such as Particle Swarm Optimization and Genetic Algorithms, on GPUs to demonstrate the performance that can be achieved with this technology compared to using a CPU alone.
Sergio Jimenez Vargas and Alexander Gelbukh. SC spectra: A linear-time soft cardinality approximation for text comparison
Abstract: Soft cardinality (SC) is a softened version of classic cardinality in set theory. Given the prohibitive computing cost of the SC definition (exponential order), an approximation quadratic in the number of terms in the text was previously proposed. In this paper, we present a new linear-time method for computing SC by dividing the text string into several sub-strings (i.e., q-grams) of different sizes. SC in combination with known resemblance coefficients allows the construction of similarity functions useful for text comparison. These similarity measures have been used in the past for addressing an entity resolution problem (name matching), outperforming a state-of-the-art method such as SoftTFIDF. The newly proposed method for approximating soft cardinality, named SC spectra, improves on those previous results in time and performance, allowing the new method to be used with larger documents such as those in 9 classic information retrieval collections. SC spectra outperformed SoftTFIDF and cosine tf-idf baselines with an approach that does not require term weighting.
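As a rough illustration of the underlying notion (our own sketch, not the paper's SC spectra method), soft cardinality discounts near-duplicate elements by dividing each element's unit contribution by its total similarity to the rest of the set; here character-bigram Jaccard stands in for the term similarity:

```python
def qgrams(s, q=2):
    s = f"#{s}#"  # pad so short tokens still yield q-grams
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def sim(a, b, q=2):
    ga, gb = qgrams(a, q), qgrams(b, q)
    return len(ga & gb) / len(ga | gb)  # Jaccard on character q-grams

def soft_cardinality(tokens, q=2):
    """|A|' = sum_a 1 / sum_b sim(a, b): unlike classic cardinality,
    similar tokens contribute less than one unit each."""
    return sum(1.0 / sum(sim(a, b, q) for b in tokens) for a in tokens)

print(round(soft_cardinality(["john", "smith"]), 2))  # 2.0: no shared bigrams
print(round(soft_cardinality(["john", "jhon"]), 2))   # 1.6: the misspelling is discounted
```

The classic cardinality of both sets is 2; soft cardinality drops below 2 only when the tokens overlap.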
Inverse Kinematics Solution for Robotic Manipulators Using a CUDA-based Parallel Genetic Algorithm
Abstract: Inverse kinematics is one of the most basic problems that needs to be solved when using robot manipulators in a work environment. A closed-form solution is heavily dependent on the geometry of the manipulator. A solution may not be possible for certain robots. On the other hand, there may be an infinite number of solutions, as is the case of highly redundant manipulators. We propose a Genetic Algorithm (GA) to approximate a solution to the inverse kinematics problem for both the position and orientation. This algorithm can be applied to different kinds of manipulators. Since typical GAs may take a considerable time to find a solution, a parallel implementation of the same algorithm (PGA) was developed for its execution on a CUDA-based architecture. A computational model of a PUMA 500 robot was used as a test subject for the GA. Results show that the parallel implementation of the algorithm was able to reduce the execution time of the serial GA significantly while also obtaining the solution within the specified margin of error.
Oscar Herrera and Miguel Gonzalez. Optimization of Parameterized Compactly Supported Orthogonal Wavelets for Data Compression
Abstract: In this work we review the parameterization of filter coefficients of compactly supported orthogonal wavelets used to implement the discrete wavelet transform. We also present the design of wavelet based filters as a constrained optimization problem where a genetic algorithm can be used to improve the compression ratio on gray scale images by minimizing their entropy and we develop a quasi-perfect reconstruction scheme for images. Our experimental results report a significant improvement over previous works and they motivate us to explore other kinds of perfect reconstruction filters based on parameterized tight frames.
Yulia Ledeneva, René Arnulfo García Hernández, Romyna Montiel Soto, José Rafael Cruz Reyes and Alexander Gelbukh. EM Clustering Algorithm for Automatic Text Summarization
Abstract: Automatic text summarization has emerged as a technique for accessing only useful information. In order to assess the quality of the automatic summaries produced by a system, a gold-standard collection of human summaries for 567 single-news documents was developed for DUC (Document Understanding Conference) 2002. In that conference only five systems could outperform the baseline heuristic in the single-document extractive summarization task. So far, some approaches have obtained good results by combining different strategies with language-dependent knowledge. In this paper we present a competitive method based on an EM clustering algorithm for improving the quality of automatic summaries using practically language-independent knowledge. A comparison of this method with three text models is also presented.
Lisbeth Rodriguez and Xiaoou Li. A Support-Based Vertical Partitioning Method for Database Design
Abstract: In association rule mining, support is a measure of the association between two sets of items, indicating the relative occurrence of both sets within the overall set of transactions. In this paper, we propose a support-based vertical partitioning method that is easy to implement and can find an optimal vertical partitioning scheme. We present several experimental results that demonstrate the validity of the proposed method.
Juan Irving Vasquez and L. Enrique Sucar. Next-Best-View Planning for 3D Object Reconstruction Under Positioning Error
Abstract: To acquire a 3D model of an object it is necessary to plan a set of locations, called views, where a range sensor will be placed. The problem is solved in a greedy manner, by iteratively selecting the next best view. When a mobile robot is used, we have to take positioning errors into account, given that they can affect the quality and efficiency of the plan. We propose a method to plan "safe views", which are successful even when there is positioning error. The method is based on a re-evaluation of the candidate views according to their neighbors, so that view points which are safer against positioning error are preferred. The method was tested in simulation with objects of different complexities. Experimental results show that the proposed method achieves results similar to the ideal case without error, reducing the number of views required compared with the standard approach that does not consider positioning error.
Ildar Batyrshin, Martine Ceberio and Vladik Kreinovich. No Free Lunch Result for Interval and Fuzzy Computing: When Bounds Are Unusually Good, Their Computation is Unusually Slow
Abstract: On several examples from interval and fuzzy computations and from related areas, we show that when the results of data processing are unusually good, their computation is unusually complex. This makes us think that there should be an analog of Heisenberg's uncertainty principle, well known in quantum mechanics: when we encounter an unusually beneficial situation in terms of results, it is not as perfect in terms of the computations leading to those results. In short, nothing is perfect.
Christelle Jacob, Didier Dubois, Janette Cardoso, Martine Ceberio, Vladik Kreinovich and Ildar Batyrshin. Estimating Probability of Failure of a Complex System Based on Inexact Information about Subsystems and Components, with Applications to Aircraft Maintenance
Abstract: In many real-life applications (e.g., in aircraft maintenance), we need to estimate the probability of failure of a complex system (such as an aircraft as a whole or one of its subsystems). Complex systems are usually built with redundancy allowing them to withstand the failure of a small number of components. In this paper, we assume that we know the structure of the system, and, as a result, for each possible set of failed components, we can tell whether this set will lead to a system failure. For each component A,
we know the probability P(A) of its failure with some uncertainty: e.g., we know the lower and upper bounds for this probability. Usually, it is assumed that failures of different components are independent events. Our objective is to use all this information to estimate the probability of failure of the entire complex system. In this paper, we describe a new efficient method for such estimation based on Cauchy deviates.
Sofia N. Galicia-Haro and Alexander Gelbukh. Age-Related Temporal Phrases in Spanish and French
Abstract: This paper reports research on temporal expressions. The analyzed phrases include a common temporal expression for a period of years reinforced by an adverb of time. We found that some of those phrases are age-related expressions. We analyzed samples obtained from the Internet for Spanish and French to determine appropriate annotation for marking up text and translation. We present the results for a group of selected classes.
Content determination through planning for a flexible game tutorial
Abstract: The goal of this work is to design and implement an agent which generates hints for a player in a first-person shooter game. The agent is a computer-controlled character that collaborates with the player during the tutorial phase of the game. This agent uses state-of-the-art reasoning techniques from the area of artificial intelligence planning to come up with the content of the instructions. Moreover, it applies techniques from the area of natural language generation to generate the hints. As a result, the instructions are both causally appropriate at the point at which they are uttered and relevant to the goal of the game. We evaluated the tutorial agent with human users and obtained encouraging results on objective as well as subjective metrics.
Daniela Sanchez and Patricia Melin. A New Model of Modular Neural Networks with Fuzzy Granularity for Pattern Recognition and its optimization with Hierarchical Genetic Algorithms
Abstract: In this paper we propose a new model of a Modular Neural Network (MNN) with fuzzy integration based on granular computing. The topology and parameters of the model are optimized with a Hierarchical Genetic Algorithm (HGA). The model was applied to the case of human recognition to illustrate its applicability. The proposed method is able to divide the data automatically into sub-modules, to work with a percentage of the images, and to select which images will be used for training. To test this method we considered the problem of human recognition based on the ear, using a database of 77 persons with 4 images per person.
Fundamental features of metabolic computing
Abstract: The cell represents the basic unit of life and can be interpreted as a chemical machine. Present knowledge of molecular biology allows the characterization of metabolism as a processing concept. This concept is an evolutionary biochemical product which has been developed over millions of years. In this paper we present and discuss the analyzed features of metabolism, which represent the fundamental features of the metabolic computing process. Furthermore, we compare this molecular computing method with methods defined and discussed in computer science. The comparison shows that there is no method in the field of computer science that uses all of the analyzed metabolic features. This is the reason why we formalize the metabolic processing method.
Carlos Roberto Domínguez Mayorga, María Angélica Espejel Rivera, Luis Enrique Ramos Velasco, Julio Cesar Ramos Fernández and Enrique Escamilla-Hernández. Wavelet neural network algorithms with applications in approximation signals
Abstract: In this paper we present adaptive algorithms based on neural networks and wavelet series for building wavenet function approximators. Numerical simulation results are shown for two wavenet approximator architectures: the first is based on a wavenet that approximates the signals under study, with the parameters of the neural network adjusted online; the second uses an approximation scheme with an IIR filter at the output of the wavenet, which helps reduce the convergence time to a desired minimum error.
Gabriel Gastón Infante Lopez and Martín Ariel Domínguez. A New General Grammar Formalism for Parsing.
Abstract: We introduce Probabilistic Constrained W-grammars (PCW-grammars), a two-level formalism capable of capturing grammatical frameworks used in three different state-of-the-art grammar formalisms, namely Bilexical Grammars, Markov Rules, and Stochastic Tree Substitution Grammars. For each of them we provide an embedding into PCW-grammars, which allows us to derive properties about their expressive power and consistency, and relations between the formalisms studied.
Pablo Hernández Torres, María Angélica Espejel Rivera, Luis Enrique Ramos Velasco, Julio Cesar Ramos Fernández and Julio Waissman Vilanova. Type-2 Neuro-Fuzzy Modeling for a Batch Biotechnological Process
Abstract: In this paper we develop a Type-2 Fuzzy Logic System (T2FLS) to model a batch biotechnological process. Type-2 fuzzy logic systems are suitable for handling uncertainty such as that arising from process measurements. The developed model is contrasted with a usual type-1 fuzzy model driven by the same uncertain data. Model development is guided mainly by experimental data comprising thirteen data sets obtained from different runs of the process, each presenting a different level of uncertainty. The model parameters are tuned with the gradient-descent rule, a technique from the neural networks field.
Fernando Gaxiola, Patricia Melin, Fevrier Valdez and Oscar Castillo. Modular neural networks with type-2 fuzzy integration for pattern recognition using the iris biometric measure
Abstract: This paper presents a new modular neural network architecture with modules and sub-modules as a system for pattern recognition based on the iris biometric measurement of persons. In this system, the properties of the person iris database are enhanced with image processing methods, and the coordinates of the center and radius of the iris are obtained to crop the area of interest, removing the noise around the iris. The inputs to the modular neural network are the processed iris images and the output is the identity of the person. The integration of the modules was done with a type-2 fuzzy integrator at the level of the sub-modules, and with a gating network at the level of the modules.
Mohammad Sadegh Rasooli, Heshaam Faili and Behrouz Minaei-Bidgoli. Unsupervised Identification of Persian Compound Verbs
Abstract: One of the main tasks related to multiword expressions (MWEs) is compound verb identification. There have been many works on the unsupervised identification of multiword verbs in many languages, but no conspicuous work on the Persian language yet. Persian multiword verbs (known as compound verbs) are a kind of light verb construction (LVC) with syntactic flexibility, such as unrestricted word distance between the light verb and the nonverbal element. Furthermore, the nonverbal element can be inflected. These characteristics make the task very difficult for Persian. In this paper, two different unsupervised methods are proposed to automatically detect compound verbs in Persian. In the first method, extending the concept of the pointwise mutual information (PMI) measure, a bootstrapping method is applied. In the second approach, the K-means clustering algorithm is used. Our experiments show that the proposed approaches obtain results superior to the baseline, which uses PMI as its association metric.
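The PMI association measure at the core of the first method can be computed directly from corpus counts (a generic sketch with made-up counts, not the paper's extended measure):

```python
from math import log2

def pmi(count_xy, count_x, count_y, n):
    """PMI(x, y) = log2( P(x, y) / (P(x) P(y)) ), estimated from counts
    in a corpus of n word pairs."""
    return log2((count_xy / n) / ((count_x / n) * (count_y / n)))

# Hypothetical counts: nonverbal element x, light verb y, corpus size n.
# High PMI means x and y co-occur far more often than chance, a hint
# that "x y" is a lexicalized compound verb rather than a free combination.
n = 1_000_000
print(round(pmi(count_xy=400, count_x=1_000, count_y=20_000, n=n), 2))  # 4.32
```

Here the pair occurs 20 times more often than independence would predict, giving PMI = log2(20) ≈ 4.32.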
Alejandro Molina, Juan-Manuel Torres-Moreno, Iria Da Cunha, Eric Sanjuan, Gerardo Sierra and Patricia Velazquez-Morales. Contextual Discourse Segmentation for Sentence Compression
Abstract: Earlier studies have raised the possibility of summarizing at the sentence level. This simplification helps to adapt textual content to a limited space. Sentence compression is therefore an important resource for automatic summarization systems.
However, few studies consider context in the sentence compression task; to our knowledge, none for the Spanish language. In this paper, we study the relationship between discourse segmentation and compression in context for sentences in Spanish.
We use a discourse segmenter and observe to what extent the passages deleted by annotators fit the discourse structures detected by the system. The main idea is to verify whether automatic discourse segmentation can serve as a basis for identifying segments to be eliminated in the sentence compression task. We show that discourse segmentation could be a first solid step towards a sentence compression system.
Pilar Pozos Parra, Laurent Perrussel and Jean Marc Thevenin. Belief Merging using Normal Forms
Abstract: Belief merging aims to conciliate multiple, possibly inconsistent belief bases into a consistent common belief base. To handle inconsistency, several operators have been proposed; most of them do not consider inconsistent bases. $PS$-$Merge$ is an alternative merging method that uses the notion of Partial Satisfiability and allows us to take inconsistent bases into account. $PS$-$Merge$ needs the bases represented as DNF formulas; nevertheless, many practical problems are more easily represented in CNF. The aim of this paper is to extend the notion of Partial Satisfiability in order to consider bases represented as CNF formulas. Moreover, we consider prime normal forms in order to define a method that allows us to implement $PS$-$Merge$ for difficult theories. We also show that once the belief bases are represented as sets of normal forms, $PS$-$Merge$ is polynomial.
Lisbeth Rodríguez, Xiaoou Li and Pedro Mejía Alvarez. An Active System for Dynamic Vertical Partitioning of Relational Databases
Abstract: Vertical partitioning is a well-known technique to improve query response time in relational databases. It consists in dividing a table into a set of fragments of attributes according to the queries run against the table. In dynamic systems the queries tend to change over time, so a dynamic vertical partitioning technique is needed that adapts the fragments to the changes in query patterns in order to avoid long query response times. In this paper, we propose an active system for dynamic vertical partitioning of relational databases, called DYVEP (DYnamic VErtical Partitioning). DYVEP uses active rules to vertically fragment and refragment a database without the intervention of a database administrator (DBA), maintaining an acceptable query response time even when the query patterns in the database change. Experiments with the TPC-H benchmark demonstrate efficient query response times.
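As background for how query patterns drive vertical partitioning, the usual input to such methods is an attribute-affinity matrix built from the query workload. The sketch below is a generic illustration of that classic construction, not DYVEP's actual rule set; the query format is an assumption for the example.

```python
def affinity_matrix(queries, attributes):
    """Attribute-affinity matrix: aff[i][j] accumulates how often
    attributes i and j are accessed together, weighted by query
    frequency. Fragments then group high-affinity attributes."""
    idx = {a: k for k, a in enumerate(attributes)}
    aff = [[0] * len(attributes) for _ in attributes]
    for attrs, freq in queries:  # each query: (accessed attributes, frequency)
        for a in attrs:
            for b in attrs:
                aff[idx[a]][idx[b]] += freq
    return aff

# Two queries over a three-attribute table.
aff = affinity_matrix([(["a", "b"], 3), (["b", "c"], 2)], ["a", "b", "c"])
```

A dynamic partitioner would recompute such statistics as the workload changes and refragment when affinities shift.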
Dora-Luz Flores, Manuel Castañón-Puga and Carelia Gaxiola-Pacheco. A Complex Social System Simulation using Type-2 Fuzzy Logic and Multiagent System
Abstract: The use of new simulation techniques, such as multi-agent systems, to represent complex systems like social systems is increasingly accepted. To represent the uncertainty these systems involve, the use of fuzzy logic, and particularly type-2 fuzzy logic, is also gaining acceptance. The case study presented in this paper is a system with three types of agents, each assigned a specific role and goals to accomplish both individually and in teams; success or failure is determined by group performance rather than individual achievement. The environment, or context, is also taken into account as another type of agent. Fuzzy inference systems are defined for each of the agents to represent the interpretation of concepts.
Asdrúbal López Chau, Xiaoou Li, Wen Yu, Jair Cervantes and Pedro Mejía Alvarez. Border samples detection for data mining applications using non convex hulls
Abstract: Border points are those instances located at the outer margin of dense clusters of samples. Their detection is important in many areas such as data mining, image processing, robotics, GIS and pattern recognition. In this paper we propose a novel method to detect border samples. The proposed method makes use of a discretization and works on partitions of the set of points. The border samples are then detected by applying an algorithm similar to the one presented in reference [8] to the sides of convex hulls. We apply the novel algorithm to a classification task in data mining; experimental results show the effectiveness of our method.
Marcela Quiroz Castellanos, Laura Cruz Reyes, José Torres Jiménez, Claudia Gómez Santillán, Mario César López Locés, Jesús Eduardo Carrillo Ibarra and Guadalupe Castilla Valdez. Improving the Performance of Heuristic Algorithms Based on Causal Inference
Abstract: Causal inference can be used to construct models that explain the performance of heuristic algorithms for NP-hard problems. In this article we show the application of causal inference to the algorithmic optimization process through an experimental analysis that assesses the impact of the parameters controlling the behavior of a heuristic algorithm. As a case study we present an analysis of the main parameters of a state-of-the-art procedure for the Bin Packing Problem (BPP). The studies confirm the importance of applying causal reasoning as a guide for improving the performance of algorithms.
Fernando M. Montes-González and Fernando Aldana-Franco. The Evolution of Signal Communication for the e-puck Robot
Abstract: In this paper we report our experiments with e-puck robots on developing a communication system through evolutionary robotics. To do so, we follow the evolutionary approach using neural networks and genetic algorithms. The robots develop a communication scheme for solving tasks such as locating food areas, avoiding obstacles, approaching light sources, and locating sound sources (other robots emitting sounds). The Evorobot* and Webots simulators are used as tools for computing the evolutionary processes and for optimizing the weights of the neural controllers.
Hossain Zulfikar, Yeap Wai and Olaf Diegel. Testing a theory of perceptual mapping using robots
Abstract: This paper describes the implementation of a new approach to mapping with a mobile robot that is based upon a theory of human perceptual mapping. Its key feature is that it does not track the robot's position as the robot moves through the environment, and it does not perform error correction for the robot's sensors. It produces an approximate map of the environment that is adequate for orienting oneself and for knowing where things are located. It is the kind of map that humans appear to have.
Case Studies on Invariant Generation Using a Saturation Theorem Prover
Abstract: Automatic understanding of the intended meaning of computer programs is a very hard problem, requiring intelligence and reasoning. In this paper we evaluate a program analysis method, called symbol elimination, that uses first-order theorem proving techniques to automatically discover non-trivial program properties. We discuss implementation details of the method, present experimental results, and discuss the relation of the program properties obtained by our implementation and the intended meaning of the programs used in the experiments.
Rahmatullah Hafiz and Richard A. Frost. Modular Natural Language Processing Using Declarative Attribute Grammars
Abstract: A system based on a general top-down parsing algorithm has been developed which allows language processors to be created as executable specifications of arbitrary attribute grammars. Declarative notation of attribute grammars allows modular construction of executable language definitions. Syntax is defined through general context-free grammar rules, and meaning is defined by associated semantic rules with arbitrary dependencies. An innovative technique allows parses to be pruned by arbitrary semantic constraints. This new technique is useful in modelling natural-language phenomena by imposing unification-like restrictions, and accommodating long-distance and cross-serial dependencies, which cannot be handled by context-free rules alone.
Ignacio Segovia Domínguez, Arturo Hernández Aguirre and Enrique Villa Diharce. Global Optimization with the Gaussian Polytree EDA
Abstract: This paper introduces the Gaussian polytree estimation of distribution algorithm: a new construction method and its application to estimation of distribution algorithms in continuous variables. The variables are assumed to be Gaussian. The construction of the tree and the edge orientation algorithm are based on information-theoretic concepts such as mutual information and conditional mutual information. The proposed Gaussian polytree estimation of distribution algorithm is applied to a set of benchmark functions. The experimental results show that the approach is robust; comparisons are provided.
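As a point of reference for the information-theoretic quantities mentioned above: for jointly Gaussian variables, the mutual information used to score candidate edges reduces to a closed form in the correlation coefficient. The sketch below illustrates that standard identity from sample data; it is not the authors' algorithm, and the data are invented.

```python
import math

def gaussian_mi(xs, ys):
    """Mutual information (in nats) of two jointly Gaussian variables,
    estimated via the sample correlation: MI = -0.5 * ln(1 - rho^2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    rho = cov / math.sqrt(vx * vy)
    return -0.5 * math.log(1.0 - rho ** 2)
```

In a polytree EDA, pairwise scores like this (and their conditional analogues) would rank candidate edges during tree construction and orientation.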
Paula Hernández Hernández, Claudia Guadalupe Gómez Santillán, Laura Cruz Reyes, Carlos Alberto Ochoa Ortíz Zezzatti, Norberto Castillo García and Gilberto Rivera Zárate. Hyperheuristic for the Parameter Tuning of a Bio-Inspired Algorithm of Query Routing in P2P Networks
Abstract: The computational optimization field defines the parameter tuning problem as the correct selection of parameter values in order to stabilize the behavior of an algorithm. This paper deals with parameter tuning under dynamic and large-scale conditions for an algorithm that solves the Semantic Query Routing Problem (SQRP) in peer-to-peer networks. In order to solve SQRP, the HH_AdaNAS algorithm is proposed: an ant colony algorithm that deals synchronously with two processes. The first process consists in generating a SQRP solution. The second adjusts the Time To Live parameter of each ant through a hyperheuristic. HH_AdaNAS performs adaptive control through the hyperheuristic, considering SQRP local conditions. The experimental results show that HH_AdaNAS, incorporating parameter tuning with hyperheuristics, increases its performance by 2.42% compared with the algorithms for solving SQRP found in the literature.
Oscar Dalmau and Teresa Alarcon. MFCA: Matched Filter with Cellular Automata for Retinal Vessel Detection
Abstract: Blood vessel extraction is an important step for abnormality detection and for obtaining a good diabetic retinopathy diagnosis in digital retinal images. The use of filter banks has been shown to be a powerful technique for detecting blood vessels. In particular, the Matched Filter is appropriate and efficient for this task, and in combination with other methods blood vessel detection can be improved. We propose a combination of the Matched Filter with a segmentation strategy based on a cellular automaton. The strategy presented here is very efficient and experimentally yields competitive results compared with other state-of-the-art methods.
Mario Martínez Molina, Marco Antonio Moreno Armendáriz, Nareli Cruz Cortés and Juan Carlos Seck Tuoh Mora. Modeling the Prey-Predator Dynamics Via Particle Swarm Optimization and Cellular Automata
Abstract: Through the years several methods have been used to model an organism's movement within an ecosystem modeled with cellular automata, from simple algorithms that change a cell's state according to some pre-defined heuristic, to diffusion algorithms based on the one-dimensional Navier-Stokes equation or lattice gases. In this work we show a cellular automata model of a theoretical population where predator dynamics evolve through particle swarm optimization. Each season, predators search for the best position in the lattice according to their own experience and the collective knowledge of the swarm, using a fitness function that assigns a quality level according to the local prey density at each site of the lattice. To the best of our knowledge, such an approach has never been used to model predator dynamics in a spatial model. The experiments show oscillations typical of Lotka-Volterra systems, where each increase in the size of the predator population is followed by a decrease in the size of the prey population.
Weighted Profile Intersection Measure for Profile-based Authorship Attribution
Abstract: This paper introduces a new similarity measure called weighted profile intersection (WPI) for profile-based authorship attribution (PBAA). Authorship attribution (AA) is the task of determining which, from a set of candidate authors, wrote a given document. Under PBAA an author's profile is created by combining information extracted from sample documents written by the author of interest. An unseen document is associated with the author whose profile is most similar to the document. Although competitive performance has been obtained with PBAA, the method is limited in that the most used similarity measure only accounts for the number of overlapping terms among test documents and authors' profiles. We propose a new measure for PBAA, WPI, which takes into account inter-author term penalization and normalization factors besides the number of overlapping terms. Intuitively, in WPI we rely more on those terms that are (frequently) used by the author of interest and not (frequently) used by other authors when computing the similarity of the author's profile and a test document. We evaluate the proposed method on several AA data sets, including many data subsets from Twitter. Experimental results show that the proposed technique outperforms the standard PBAA method on all of the considered data sets, even though the baseline method proved very effective. Further, the proposed method achieves performance comparable to classifier-based AA methods (e.g., methods based on SVMs), which often obtain better classification results at the expense of limited interpretability and a higher computational cost.
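The intuition described above — weighting overlapping terms by how specific they are to the target author, with a normalization factor — can be sketched as follows. This is an illustrative reading of the idea, not the paper's exact formula; the profile representation (term-frequency dictionaries) is an assumption for the example.

```python
def wpi_sketch(doc_terms, author_profile, other_profiles):
    """Illustrative weighted profile intersection: each overlapping term
    contributes more when the target author uses it and other authors
    do not (inter-author penalization), normalized by profile size."""
    score = 0.0
    for term in set(doc_terms) & set(author_profile):
        f_author = author_profile[term]
        f_others = sum(p.get(term, 0) for p in other_profiles)
        score += f_author / (f_author + f_others)
    return score / len(author_profile)
```

A term shared with other authors' profiles is discounted, while an author-exclusive term contributes its full weight — matching the intuition that discriminative terms matter most.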
Mauricio Osorio, José Luis Carballido, Claudia Zepeda and Zenaida Cruz. Characterization of argumentation semantics in terms of the $MM^r$ Semantics
Abstract: Argumentation theory studies the fundamental mechanism humans use in argumentation and explores ways to implement this mechanism on computers. Dung's approach, presented in \cite{Dung95}, is a unifying framework which has played an influential role in argumentation research. The central notion of Dung's approach is the \emph{acceptability of the arguments}, where different argumentation semantics represent different patterns of selection of arguments. On the other hand, Baroni et al., in \cite{BarGiaGui05}, suggested another well-known argumentation approach that is based on a solid concept in graph theory. In this paper, we show that a logic programming semantics, called $MM^r$, can be used to characterize two argumentation semantics: $CF2$, introduced by P. Baroni et al.~\cite{BarGiaGui05}, and the preferred semantics defined by Dung in \cite{Dung95}. The $MM^r$ semantics \cite{NiOsZe:FI2011} is based on the \emph{minimal model semantics}. The characterization of these argumentation semantics by the $MM^r$ semantics suggests a new perception of them in terms of \emph{logic foundations}.
Roque Enrique Lopez Condori and Javier Tejada. MFSRank: An unsupervised method to extract keyphrases using semantic information
Abstract: This paper presents an unsupervised graph-based method to extract keyphrases using semantic information. The proposed method has two phases. In the first, MFS (maximal frequent sequences) are extracted; these form the nodes of the graph. The connection between two nodes is established according to shared statistical information and semantic relatedness. In the second phase, the MFS obtained in the first phase are ranked using the PageRank algorithm. The experimental results are encouraging, indicating that the proposed method performs well.
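For context on the ranking step, a plain weighted PageRank over a small graph can be sketched as below. This is the textbook algorithm, not MFSRank itself; the edge weights here simply stand in for the combined statistical/semantic relatedness the abstract describes.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Weighted PageRank: graph maps each node to {neighbor: weight}.
    Rank mass flows along edges in proportion to edge weight."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for m, nbrs in graph.items():
            out = sum(nbrs.values())
            for t, w in nbrs.items():
                new[t] += damping * rank[m] * w / out
        rank = new
    return rank

# Tiny symmetric graph of two candidate phrases.
ranks = pagerank({"a": {"b": 1.0}, "b": {"a": 1.0}})
```

In a keyphrase setting, the top-ranked nodes after convergence would be returned as keyphrases.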
Dmitrijs Rutko. Fuzzified Tree Search in Real Domain Games
Abstract: The fuzzified game tree search algorithm is based on the idea that an exact game tree evaluation is not required to find the best move. Therefore, pruning techniques may be applied earlier, resulting in faster search and greater performance. Applied to an abstract domain, it outperforms existing algorithms such as Alpha-Beta, PVS, Negascout, NegaC*, SSS*/Dual* and MTD(f). In this paper we present experimental results in real domain games, where the proposed algorithm demonstrated a 10 percent performance increase over existing algorithms.
Uriel H. Hernandez-Belmonte, Victor Ayala-Ramirez and Raul E. Sanchez-Yanez. A Comparative Review of Two-Pass Connected Component Labeling Algorithms
Abstract: In this paper, we present a comparative review of Connected Component Labeling (CCL) methods, focused on two-pass variants, including their elements and implementation issues. We analyze the main elements used by these CCL algorithms and their importance for the performance of the methods using them. We present experiments using a complex image set and evaluate the performance of each algorithm under analysis.
Felix Emilio Luis-Pérez, Felipe Trujillo-Romero and Wilebaldo Martinez-Velazco. Control of a service robot using the Mexican Sign Language
Abstract: This paper presents the results obtained by recognizing the Mexican Sign Language (LSM) alphabet as a control element for a service robot. Signs are recognized by segmenting images with active contours. Once segmented, the signature of the corresponding sign is obtained and a neural network is trained for its subsequent recognition. Each symbol of the LSM is assigned a task that the robotic system must perform; we defined eight different tasks. The system was validated using both a simulation environment and a real system. For the real case we used a mobile platform (PowerBot) equipped with a manipulator with 6 degrees of freedom (PowerCube). For the simulation, a model of the mobile platform and manipulator was used, with RoboWorks as the simulation environment. In both simulated and real tests, images different from those learned by the system were used, obtaining in both cases a recognition rate of 95.8%.
Comparative Study of BSO and GA for the Optimizing Energy in Ambient Intelligence
Abstract: One of the concerns of humanity today is energy saving, because we need to reduce energy costs to promote economic, political and environmental sustainability. Industrial and commercial users want to increase energy efficiency and maximize their benefit. One of the goals of Ambient Intelligence (AmI) is to provide automatic and customized solutions for the final user, and in recent times one of the main priorities is energy management. The goal of this project is to develop a system able to find optimal configurations for energy saving through lighting management. In this paper we compare a Genetic Algorithm (GA) and Bee Swarm Optimization (BSO), applying them to the optimization and management of lighting in an intelligent office as the main scenario, taking into account the activity of the users, the size of the area, and the quantity and power of the lights. Applying the two algorithms and comparing the results with the Wilcoxon test, the best solutions are obtained with the Genetic Algorithm (GA), whose solutions are closest to the lighting required for each activity.
Philippe Fournier-Viger, Roger Nkambou, Andre Mayers, Engelbert Mephu Nguifo and Usef Faghihi. An Hybrid Expertise Model to Support Tutoring Services in Robotic Arm Manipulations
Abstract: To build an intelligent tutoring system, a key task is to define an expertise model that can support appropriate tutoring services. However, for some ill-defined domains, classical approaches for representing expertise do not work well. To address this issue, we illustrate in this paper a novel approach: combining several approaches into a hybrid model to support tutoring services in procedural and ill-defined domains, the idea being to combine them so as to overcome their individual limitations. We illustrate this idea in a tutoring system for operating the Canadarm2 robotic arm on the International Space Station. To support tutoring services in this ill-defined domain, we have developed a model combining three approaches: (1) a data mining approach for automatically building a task model from user solutions, (2) a cognitive model to cover well-defined parts of the task and spatial reasoning, and (3) a 3D path-planner to cover all other aspects of the task. Experimental results show that the hybrid model provides learners with assistance much richer than what each individual approach could offer.
Development of a system of electrodes for reading consents activity of an amputated leg (up-knee) and the prosthesis application.
Abstract: We report the design of electrodes, standardized and positioned based on the study of the anatomy and the motor units identified by the nerve branches of an amputated (above-knee) leg [1], obtaining myoelectric signals with a maximum amplitude of 20 mV [2]. The initial identification of the optimal electrode position was characterized with myoelectrography in order to determine the movements that the patient makes consciously. The myoelectric signal was acquired taking into account the electrochemical schema of the cellular membrane [3], based on its fields and frequencies, recognizing circuit structural materials that provide sufficient resistivity against spurious frequencies [4] and isolating each motor unit [5]. The results show that it is possible to obtain a clean signal which describes the movement that the patient consciously desires.
Leticia Flores, Oleg Starostenko and Gustavo Rodriguez Gomez. Similarity Metric Behavioural for Image Retrieval Modelling in the Context of Spline Radial Basis Function
Abstract: The behavior of a similarity measure involves a wide spectrum of factors that determine its performance. Image retrieval based on similarity metrics obtains remarkable results in comparison with discrimination methods. The similarity metric is used in the matching process between the user's query image and the image collection. The discrimination method considered here uses angle-vector comparison to detect distances between queries and image collections. In this work a radial basis function is applied as the similarity measure, and its results are compared against a discriminant method, the GPCA algorithm, for image retrieval purposes. A spline radial basis function is employed in tests with several image repositories. The retrieval process reports 64% while avoiding a classification phase, in contrast with 63% for discrimination with the GPCA method. The tests illustrate preliminary results of this research in the context of image retrieval.
Efficient pattern recalling using a non iterative Hopfield associative memory
Abstract: Associative memories have proven useful in the pattern processing area. The Hopfield model is an autoassociative memory that has problems in the recall phase; among them are the convergence time and, in certain cases, non-convergence with badly recovered patterns. In this paper, a new algorithm for the Hopfield associative memory eliminates the iteration, reducing computing time and uncertainty in pattern recall. The algorithm is implemented using a corrective vector extracted in dependence on the first iterated pattern. The corrective vector is used to adjust misclassifications in the recalled output patterns. Results show a good performance of the proposed algorithm, providing an alternative tool for the pattern recognition field.
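For reference, the conventional Hopfield memory that the paper's non-iterative algorithm improves upon stores patterns in a Hebbian weight matrix and recalls by repeated sign updates. The sketch below shows that baseline (Hebbian storage plus one synchronous update step), not the authors' corrective-vector method.

```python
def train(patterns):
    """Hebbian weight matrix for a Hopfield autoassociative memory.
    Patterns are bipolar vectors (+1/-1); diagonal is kept at zero."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall_step(w, x):
    """One synchronous update with sign activation; the classic model
    repeats this until the state stops changing (or fails to converge)."""
    return [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1
            for row in w]
```

With a single stored pattern, even one step of this baseline corrects a one-bit-noisy probe; the convergence issues the abstract mentions arise as more (and correlated) patterns are stored.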
Rodolfo Abraham Pazos Rangel, Juan Javier González Barbosa and Marco Antonio Aguirre Lam. Semantic Model for Improving the Performance of Natural Language Interfaces to Databases
Abstract: Despite the fact that many Natural Language Interfaces to Databases (NLIDBs) have been developed since the late 60s, many problems persist that prevent the translation from natural language to SQL from being totally successful. Some of the main problems encountered relate to 1) achieving domain independence, 2) the use of words or phrases of different syntactic categories for referring to tables and columns, and 3) semantic ellipsis. This paper introduces a new method for modeling databases that includes relevant information for improving the performance of NLIDBs. This method will be useful for solving many problems found in the translation from natural language to SQL, using a database model that contains linguistic information providing more semantics than that found in conventional database models (such as the extended entity-relationship model) and those used in previous NLIDBs.
José De Jesús Uriarte Adrián, Carlos Guillén Galván and Daniel Valdes Amaro. A Modification of the Mumford-Shah Functional for Segmentation of Digital Images with Fractal Objects
Abstract: In this paper we revisit the Mumford-Shah functional, one of the most studied variational approaches to image segmentation. The contribution of this work is to propose a modification of the Mumford-Shah functional that includes fractal analysis to improve the segmentation of images with fractal or semi-fractal objects. Here we show how the fractal dimension is calculated and embedded in the functional minimisation computation to drive the algorithm to use both changes in the image intensities and the fractal characteristics of the objects, obtaining a more suitable segmentation. Experimental results confirm that the proposed modification improves the quality of the segmentation in images with fractal or semi-fractal objects, such as medical images.
Grigori Sidorov, A. Herrera, J. Posadas, S. Galicia, L. Chanona. Heuristic Algorithm for Extraction of Facts using Relational Model and Syntactic Data
Abstract: From the semantic point of view, information is usually contained in units smaller than sentences, called facts. However, the identification of facts in a text is not a trivial task. In the present work, we describe an algorithm for the extraction of facts from sentences using a simple representation based on the relational model, focusing on texts that contain many facts by nature (structured textbooks). The algorithm is based on syntactic analysis. It can be useful for systems that support queries, summaries or other processing. Our experiments were conducted for Spanish, and we obtained higher precision than similar methods.
Lucas Agussurja and Hoong Chuin Lau. A POMDP Model for Guiding Taxi Cruising in a Congested Urban City
Abstract: We consider a partially observable Markov decision process (POMDP) model for improving a taxi agent's cruising decisions in a congested urban city. Using real-world data provided by a large taxi company in Singapore as a guide, we derive the state transition function of the POMDP. Specifically, we model the cruising behavior of the drivers as continuous-time Markov chains. We then apply a dynamic programming algorithm to find the optimal policy of the driver agent. Using simulation, we show that this policy is significantly better than a greedy policy in a congested road network.
Liliana Argotte, Julieta Noguez and Gustavo Arroyo. Intelligent Learning System based on SCORM Learning Objects
Abstract: This paper presents the creation of adaptive SCORM sequencing models, taking advantage of the latest developments in the artificial intelligence field to provide the best choice to the student, based on learning objects, using a tutor model in self-learning. The tutor uses decision networks, also called influence diagrams, to improve the development of resources and learning materials in a learning content management system and to offer students the best pedagogical decision according to their performance. The intelligent learning system is validated in an online environment. The results of the evaluation process in undergraduate engineering courses are encouraging, showing improvements in the learning of students who used this approach compared to those who did not. The paper also shows the potential application of this learning approach for power system operators.
Tiago Matos, Yannick P. Bergamo, Valdinei F. Da Silva and Anna H. Reali Costa. Stochastic Abstract Policies for Transfer Learning in Robotic Navigation Tasks
Abstract: Most work in path-planning approaches for mobile robots does not take into account existing solutions to similar problems when learning a policy to solve a new problem, and consequently solves the current navigation problem from scratch. In this article we investigate a transfer learning technique that enables the speeding up of planning and learning in a new task by transferring knowledge from one or more related source tasks. Here we represent the knowledge learned as a stochastic abstract policy, which can be induced from a training set given by a set of navigation examples of state-action sequences executed successfully by a robot to achieve a specific goal in a given environment. We propose both a probabilistic and a nondeterministic abstract policy, in order to preserve the occurrence of all actions identified in the inductive process. Experiments carried out attest to the effectiveness and efficiency of our proposal.
Adam Puchalski and Urszula Markowska-Kaczmar. Similar image recognition inspired by visual cortex
Abstract: The paper presents a method of image recognition inspired by research on the visual cortex. The architecture of our model, called CaNN, is similar to the one proposed in the neocognitron, LeNet or HMAX networks. It is composed of many consecutive layers with various numbers of planes (receptive fields). Units in the corresponding positions of the planes in one layer receive input from the same region of the preceding layer. Each plane is sensitive to one pattern. The method assumes that pattern recognition is based on edges, which are found in the input image using a Canny detector. Then the image is processed by the network. The novelty of our method lies in the way information is processed in each layer and in the application of a clustering module in the last layer, where the patterns are recognized. The transformations performed by the CaNN model find their own representation of the training patterns. The method is evaluated experimentally, and the obtained results are promising.
Intelligent robust control of dynamic systems with partial unstable generalized coordinates based on quantum fuzzy inference
Abstract: This article describes a new method for the quality control of dynamically unstable objects based on quantum computing. This method enables control of the object in unpredicted situations with incomplete information about the structure of the control object. Its efficiency over other methods of intelligent control is shown on a benchmark with partially unstable generalized coordinates: a stroboscopic robotic manipulator.
Miguel Colores Vargas, Mireya García-Vázquez, Alejandro Alvaro Ramírez-Acosta and Hector Perez Meana. Iris Image Evaluation for Non-cooperative Biometric Iris Recognition System
Abstract: During video acquisition in an automatic non-cooperative biometric iris recognition system, not all the iris images obtained from the video sequence are suitable for recognition. Hence, it is important to acquire high-quality iris images and quickly identify them, in order to eliminate the poor-quality ones (mostly defocused images) before the subsequent processing. In this paper, we present the results of a comparative analysis of four methods for iris image quality assessment that select clear images from the video sequence. The goal is to provide a solid analytic ground to underscore the strengths and weaknesses of the most widely implemented methods for iris image quality assessment. The methods are compared based on their robustness to different types of iris images and the computational effort they require. The experiments with the built database (100 videos from MBGC v2) demonstrate that the best performance scores are generated by the kernel proposed by Kang & Park. The FAR and FRR obtained are 1.6% and 2.3%, respectively.
Mohammad Ghomi, Mohammad Taghi Moeti and Mohammad Azimi. Introduction of Neural Network Model and Fuzzy Regression Method for Short Term Load Forecasting in the Center Regional of IRAN
Abstract: Short-term load forecasting plays an important role in power system studies. Load forecasting is used to reduce costs and to enhance reliability as well as operational efficiency, since awareness of future load enables scheduling the generation, transmission, and distribution of electrical energy. The most applicable techniques for load forecasting are intelligent algorithms combined with statistical and fuzzy regression models. In this study, factors affecting load forecasting were evaluated, and the Fuzzy Regression Method (FRM) and an Artificial Neural Network (ANN) were used for load forecasting in the central region of Iran. The results show that the ANN performs better overall, while at peak load times the fuzzy regression technique is more accurate. It is possible to use the upper and lower limits of fuzzy regression to forecast load on specific days.
Kazi Shah Nawaz Ripon. Using Pareto-Optimality for Solving Dynamic Facility Layout Problem under Uncertainty: A Multi-Objective Evolutionary Approach
Abstract: In this paper, we investigate an evolutionary approach for solving the multi-objective dynamic facility layout problem (FLP) under uncertainty that presents the layout as a set of Pareto-optimal solutions. Research into the dynamic FLP usually assumes that data for each time period are deterministic and known with certainty. However, production uncertainty is one of the most challenging aspects of manufacturing environments in the 21st century. Only recently have researchers begun to model the FLP with uncertainty. Unfortunately, most of the solution methodologies developed to date for both static and dynamic FLP under uncertainty focus on optimizing just a single objective. To the best of our knowledge, the use of Pareto-optimality in multi-objective dynamic FLPs under uncertainty has not yet been studied. In addition, the proposed approach has been tested using backward-pass heuristics to determine its effectiveness in optimizing multiple objectives. Results show that our approach is an efficient evolutionary dynamic FLP approach for optimizing multiple objectives simultaneously under uncertainty.
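Pareto-optimality, as used here, means keeping every layout that no other layout beats on all objectives at once. A minimal sketch, assuming two hypothetical minimized objectives (material-handling cost and rearrangement cost), neither of which is taken from the paper:

```python
def dominates(a, b):
    # a dominates b: no worse on every objective, strictly better on one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep the non-dominated solutions (all objectives minimized).
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical layouts scored as (material-handling cost, rearrangement cost).
layouts = [(10, 4), (8, 6), (12, 3), (9, 5), (11, 5)]
front = pareto_front(layouts)
```

In an evolutionary approach, a filter like this is applied to each generation to maintain the archive of non-dominated layouts.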

# Special session: Polibits

Burak Parlak. Automatic Bubble Detection in Cardiac Video Imaging
Abstract: Bubble recognition is a challenging problem in a broad range of fields, from mechanics to medicine. These gas-filled structures, whose pattern and morphology alter with their surrounding environment, can be counted either manually or with computational recognition procedures. In cardiology, user-dependent bubble detection and temporal counting in videos require special training and experience due to ultra-fast movement and inherent noise. In this study, we propose an efficient recognition routine to increase the objectivity of emboli detection. First, we compared five different methods on two different synthetic data sets emulating the cardiac chamber environment with increasing speckle noise levels. Second, the three methods most successful in simulation were applied to real echocardiographic video records, already segmented by variational active contours in order to extract the Left Atrium (LA). The detection rate of the proposed method was 96.7%, versus 92.4% and 87.3% for the others. We conclude that our approach would be useful in long-lasting video processing and could be applied to automate bubble recognition.
Amitava Mandal, Anjan Kumar Kakati, M. Chandrasekhran and Amit Kumar Singh. Prediction of optimum cutting parameters to obtain desired surface in finish pass End milling of Aluminium alloy with carbide tool using Artificial Neural Network
Abstract: End milling is one of the common metal cutting operations used for machining parts in the manufacturing industry. It is usually performed at the final stage of manufacturing a product, so the surface roughness of the produced job plays an important role. In general, surface roughness affects wear resistance, ductility, tensile and fatigue strength, etc., of machined parts and cannot be neglected in design. In the present work, an experimental investigation of end milling of an aluminium alloy with a carbide tool is carried out, and the effect of different cutting parameters on the response is studied with three-dimensional surface plots. An artificial neural network (ANN) is used to establish the relationship between the surface roughness and the input cutting parameters (i.e., spindle speed, feed, and depth of cut). The Matlab ANN toolbox, based on the feed-forward back-propagation algorithm, is used for modeling. A 3-12-1 network structure, having the minimum average prediction error, was found to be the best architecture for predicting the surface roughness value, and the network predicts surface roughness well for unseen data. Many different combinations of cutting parameters can produce a desired surface finish of the component. The optimum cutting parameters for obtaining the desired surface finish while maximizing tool life are predicted. The methodology is demonstrated, a number of problems are solved, and the algorithm is coded in Matlab®.
Medical Diagnostic Expert System Using Apriori Algorithm
Abstract: Many learning algorithms exist that are routinely used in commercial systems. However, given knowledge in the health domain, it is difficult to train computers for decision making and learning. The problem becomes complex when common symptoms of multiple diseases are present. Some knowledge-based systems (KBS), such as MYCIN and HELP, are available to identify a particular disease. We focus on this issue and develop a medical diagnostic KBS which not only identifies a specific disease, but also diagnoses the probability of other diseases to support prescribing enhanced treatment. The proposed system learns from given knowledge, creating rules for making probable decisions and creating associations among symptoms that occurred together in previous decisions. The system is flexible with respect to new rule generation and symptom associations.
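The Apriori algorithm named in the title mines itemsets (here, co-occurring symptoms) whose support meets a threshold, joining frequent k-sets into (k+1)-set candidates level by level. A minimal sketch on hypothetical symptom records, not the paper's data:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Levelwise Apriori: frequent itemsets with count >= min_support."""
    freq = {}
    items = sorted({i for t in transactions for i in t})
    current = [frozenset([i]) for i in items]
    k = 1
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(level)
        # Join frequent k-sets into (k+1)-set candidates.
        current = list({a | b for a, b in combinations(level, 2)
                        if len(a | b) == k + 1})
        k += 1
    return freq

# Hypothetical patient records listing observed symptoms.
records = [frozenset(t) for t in ({"fever", "cough"},
                                  {"fever", "cough", "fatigue"},
                                  {"fever", "fatigue"},
                                  {"cough"})]
freq = apriori(records, min_support=2)
```

Association rules such as "fever and cough occur together" would then be derived from the frequent itemsets and their counts.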
Hamid Parvin. Improving Persian Text Classification and Clustering Using Persian Thesaurus
Abstract: This paper proposes an innovative approach to improve the performance of Persian text classification. The proposed method uses a thesaurus as helpful knowledge to obtain more representative frequencies of words in the corpus. Three types of word relationships are considered in the thesaurus we use. This is the first attempt to use a Persian thesaurus in the field of Persian information retrieval. Experimental results indicate that the performance of text classification improves significantly when employing the Persian thesaurus compared with ignoring it.
Julio Castillo and Marina Cardenas. An Approach to Cross-Lingual Textual Entailment using Web Machine Translation Systems
Abstract: In this paper we show an approach to cross-lingual textual entailment (CLTE) that uses machine translation systems such as Bing Translator and Google Translate. We experiment with a wide variety of data sets for the task of textual entailment (TE) and evaluate the contribution of an algorithm that expands a monolingual TE corpus, which seems promising for the task of CLTE. We built a CLTE corpus and report a procedure that can be used to create a CLTE corpus for any pair of languages. We also report the results obtained in our experiments with the three-way classification task for CLTE and show that these results outperform the average score of RTE systems. Finally, we find that using WordNet as the only source of lexical-semantic knowledge it is possible to build a system for CLTE that achieves results comparable with the average score of RTE systems for both two-way and three-way tasks.
Rafael Gonçalves, Diego Cueva, Marcos Barretto and Fábio Cozman. A Dynamic Model for Identification of Emotional Expressions
Abstract: This paper introduces a new model for the determination of conveyed emotion during a human-machine interaction, based on Kalman filtering of instantaneous facial expressions and the emotional trajectory over an emotional surface.
Hamid Parvin. A Max Metric to Evaluate a Cluster
Abstract: In this paper a new criterion for cluster validation is proposed. This new cluster validation criterion is used to approximate the goodness of a cluster. The clusters that satisfy a threshold of the proposed measure are selected to participate in the clustering ensemble. To combine the chosen clusters, several methods are employed as aggregators. Employing this new cluster validation criterion, the obtained ensemble is evaluated on some well-known standard datasets. The empirical studies show promising results for the ensemble obtained using the proposed criterion compared with the ensemble obtained using the standard cluster validation criterion. In addition, to reach the best results, an algorithm is given for selecting the best subset of clusters from a pool of clusters.
Andrey Ronzhin, Sergey Glazkov and Jesus Savage. User Preference Model for Conscious Services in Smart Environments
Abstract: Awareness of user preferences and analysis of the current situation make it possible to provide the user with non-invasive services in various applications of smart environments. In smart meeting rooms, context-aware systems analyze users' behavior based on multimodal sensor data and provide proactive services for meeting support, including active control of PTZ (pan, tilt, and zoom) cameras and microphone arrays, context-dependent automatic archiving, and web transmission of meeting data during the interaction. The history of interaction sessions between a user and a service is used for knowledge accumulation in order to forecast user behavior during the next visit. The user preference model, based on audiovisual data recorded during interaction and statistics of the user's speech activity, requests, movement trajectories, and other parameters, was implemented for the developed mobile information robot and smart meeting room.
Recommender Student Productivity Management System
Abstract: Numerous studies worldwide demonstrate an interrelation linking students' learning productivity and emotional and psychological state to physiological parameters. Emotional states and interest in learning affect learning productivity, while physiological parameters reflect such changes. Since research by the authors of the present article confirmed these interdependencies, a Recommender System to Analyse Student's Learning Productivity (Recommender System hereafter) has been developed. The Recommender System determines the level of emotional state and learning productivity integrally by employing physiological techniques. It uses Maslow's Hierarchy of Needs and a database of the best global practices to produce recommendations for students on how to improve their learning efficiency. Although many researchers have conducted in-depth studies in this field, we did not find any system that takes students' physiological parameters, analyses their learning efficiency and, in turn, provides recommendations. Maslow's Pyramid Tacit (abilities, skills, experience, ideals, emotions, values, learning culture, beliefs, images, perspectives) and Explicit (theoretical approach, problem solving, etc.) Knowledge Tables, based both on the developed Maslow's Knowledge Tables and on models from the Model Base, provide recommendations on how to improve a student's learning productivity.
Pavel Surynek and Ivana Luksova. Automated Classification of Bitmap Images Using Decision Trees
Abstract: The paper addresses the design of a method for automated classification of bitmap images into classes described by the user in natural language. Examples of such naturally defined classes are images depicting buildings, landscapes, artistic images, etc. The proposed classification method is based on the extraction of suitable attributes from a bitmap image, such as contrast, histogram, and the occurrence of straight lines. Extracted attributes are subsequently processed by a decision tree trained in advance. An experimental evaluation with 5 classification classes showed that the proposed method achieves an accuracy of 75%-85%. The design of the method is general enough to allow extending the set of classification classes as well as the number of extracted attributes to increase the accuracy of classification.
Mahdi Naser-Moghadasi and Ataollah Arvani. A Model-free approach based on Genetic Programming for the NAO humanoid robot
Abstract: Recently, autonomous mobile robotics has attracted accelerating interest from AI and robotics researchers, with applications in both industry and academia. An autonomous robot system needs high-performance mechanical components and control software. Among the most challenging autonomous robots are those that move using legs instead of wheels. A humanoid robot is more challenging than a wheeled one because the dynamic stability of humanoids must be well maintained while the robots walk and perform other tasks. These robots usually have many degrees of freedom, which creates new problems in control and navigation. Since classical methods cannot find an exact solution for robot control in a timely manner, new techniques have been presented. A relatively new and promising area for the control of autonomous agents is evolutionary algorithms.
Maria Chondrogianni. Identifying the user’s intentions: basic illocutions in Modern Greek
Abstract: This paper presents a comprehensive classification of basic illocutions in Modern Greek, extracted from the linguistic choices speakers make when they formulate an utterance, provided such choices form part of the language's grammar. Our approach lies at the interface between morphosyntax, pragmatics, and phonology and allows basic illocutions to be established depending on the particular verb mood, particle, number, person, aspect, and segmental marker, as well as the prosodic contour used when an utterance is realized. Our results show that indicative uses, for example, are mostly associated with propositional illocutions, consisting of declarative uses, including assertions, miratives, and assertions in disguise; interrogative uses, including polar and content interrogatives; and behavioural illocutions, i.e., exhortations (expressed in the first person plural only). Secondary sentence types (involving additional segmental marking) include requests for confirmation, wondering, expression of uncertainty, and proffer. Such a theoretical approach can have a direct impact on applications involving human-computer interaction, including intention-based dialogue system modeling, natural language interfaces to databases, intelligent agents, and Belief-Desire-Intention systems, which require the computer to interpret what a user's objective (intention) is, so that the user's needs can be best served.
Helio Cavalcante, Leonardo Filipe Batista Silva De Carvalho, Roberta Lopes and Fábio Paraguaçu. A MODEL OF DECISION-MAKING BASED ON THE THEORY OF PERSUASION USED IN MMORPGs
Abstract: From a videogame perspective, decision-making is a crucial activity that takes place at all times and at different levels of perception. It also influences gamer performance, which is particularly important for RPGs, as these games can act as tools to improve the proximal development zones of the players involved. The RPG has an inherent cooperative trait that helps stimulate socialization, interaction, and the improvement of communication, in addition to involving the players in a kind of plot that favors the decision-making process. It was therefore considered an interesting testbed for the application of a model built using a Petri net coupled with concepts extracted from game theory and the reciprocity concept from the theory of persuasion.
Ramy Eskander, Amin Shoukry and Saleh El-Shehaby. An Automatic Tagger for Diacritizing and Analyzing Arabic Text
Abstract: Arabic is a language that does not lend itself easily to automatic processing. This is caused by issues such as the Arabic affixation system, the omission of disambiguating short vowels, and the diacritization system, which may be absent from the text. Moreover, Arabic letters are usually written in different forms according to their positions in a word. Accordingly, an Arabic word has about ten possible morphological solutions (analyses), with only one being correct, depending on the context of the word. Learning is required to understand context. In this work, an SVM-based Arabic automatic tagger is implemented. The tagger acts as an automatic diacritizer, tokenizer, and morphological analyzer. It is trained on a corpus whose Buckwalter morphological analysis is known and depends on a set of SVMs (Support Vector Machines) for classification. Each classifier is trained separately on a single morphological feature out of 13 features, which represent an extension of the set (of 10 features) provided by the Buckwalter analyser. This divide-and-conquer technique has proved to be efficient. A best-match technique is used to pick the correct Buckwalter morphological solution consistent with the output provided by the trained classifiers. The tagging results are promising and competitive relative to existing systems. The authors plan to extend the tagger to domain-specific Arabic understanding tasks.
Do-Thanh Sang, Dong-Chul Park, Dong-Min Woo and Yunsik Lee. Centroid Neural Network with Simulated Annealing  and Its Application to Color Image Segmentation
Abstract: Centroid Neural Network (CNN) with simulated annealing is proposed and applied to a color image segmentation problem in this paper. CNN is essentially an unsupervised competitive neural network scheme, and it is a crucial algorithm for diminishing the empirical process of parameter adjustment required in many unsupervised competitive learning algorithms, including the Self-Organizing Map. In order to reach a still lower energy level during its training stage, a stochastic optimization technique, simulated annealing, is adopted. As a result, the final energy level of CNN with simulated annealing (CNN-SA) can be much lower than that of the original Centroid Neural Network. The proposed CNN-SA algorithm is applied to a color image segmentation problem. The experimental results show that the proposed CNN-SA can yield favorable segmentation results when compared with other conventional algorithms.
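Simulated annealing, the ingredient added to CNN here, accepts energy-increasing moves with probability exp(-ΔE/T) under a decreasing temperature T, which lets training escape local minima. The sketch below shows that acceptance rule on a toy one-dimensional energy; it is an illustration of the general technique, not the authors' CNN-SA training procedure.

```python
import math
import random

def sa_minimize(energy, x0, neighbor, t0=1.0, cooling=0.95, steps=200, seed=1):
    """Simulated annealing: accept worse moves with probability exp(-dE/T)."""
    rng = random.Random(seed)
    x, e, t = x0, energy(x0), t0
    best, best_e = x, e
    for _ in range(steps):
        y = neighbor(x, rng)
        de = energy(y) - e
        # Always accept improvements; accept worsenings with prob exp(-de/t).
        if de <= 0 or rng.random() < math.exp(-de / t):
            x, e = y, e + de
        if e < best_e:
            best, best_e = x, e
        t *= cooling  # geometric cooling schedule
    return best, best_e

# Toy energy: local minimum at x = -1 (E = 1), global minimum at x = 2 (E = 0).
def energy(x):
    return min((x + 1) ** 2 + 1, (x - 2) ** 2)

best, best_e = sa_minimize(energy, x0=-1.0,
                           neighbor=lambda x, r: x + r.uniform(-1, 1))
```

Starting at the local minimum, the annealed search can hop over the barrier early on, while the cooling schedule gradually freezes it into the best region found.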
Ildar Batyrshin, Igor Bulgakov, Cesar Huitron and Ana Luisa Hernández Martínez. Data mining and visualization of oilfields in VMD-Petro
Abstract: The VMD-Petro system for visualization and data mining of oilfields, developed at the Mexican Petroleum Institute, is described. The system contains several modules: a database, a database administrator, a 3D visualizer of oilfields, a dynamic maps module, a visual time series processor, and a time series data miner. The system can serve as a workplace for a petroleum engineer for monitoring and analyzing oilfield dynamics. The functions and properties of the system modules are described.

# Special session: CPS

Hiram Ponce Espinosa and Miguel Bravo Acosta. A Novel Design Model Based on Genetic Algorithms
Abstract: The design process is a form of problem solving that involves intensive searching and matching over a large solution space. In that sense, and inspired by Kryssanov's design model, this paper introduces a novel design model based on genetic algorithms that aims to achieve an optimal solution taking into account the design objectives, the conceptual design, the functional design, and the structural design. In order to validate the proposed model, a system programmed on the LabVIEW platform was developed and tested on a case study of designing chairs. Experimental results show that the proposed model can achieve high-quality designs and, in particular, that the design of chairs can be completed faster than with traditional design models or Kryssanov's model.
Jorge Armendariz, Chidentree Treesatayapun and Arturo Baltazar. Force Feedback Controller Based on Multi Input Fuzzy Rules Emulated Networks and Hertzian Contact with Ultrasound
Abstract: In robotics, manipulation of fragile objects requires precise determination of instantaneous contact and forces with minimum intrusion. In this paper, the design of a novel force controller which combines a neuro-fuzzy algorithm with a new contact ultrasonic probe is proposed. The adaptive network, called Multi Input Fuzzy Rules Emulated Network (MIFREN), with its simple structure, transfers human knowledge about the robotic manipulator and the sensor in the form of fuzzy IF-THEN rules. The force sensor is integrated into a robotic system with four degrees of freedom under MIFREN force control. The contact probe is equipped with a 1 MHz ultrasonic transducer and consists of a hemispherical head with the transducer located at its top. The contact between the probe and the medium is modeled by Hertz theory. The experimental results showed that the proposed system exhibits high sensitivity to initial instantaneous contact and provides an estimation of the reaction force. To test the system, experiments were conducted on samples with different surface mechanical properties. The results indicate that the proposed system is able to effectively control the contact force.
Model Prediction of Academic Performance for First Year Students
Abstract: The aim of this paper was to obtain a model to predict new students' academic performance taking into account socio-demographic and academic variables. The sample contained records of first-semester students at a School of Engineering from several student generations. The data were divided into three groups: students who passed none or up to two courses (low), students who passed three or four courses (middle), and students who passed all five courses (high). By using data mining techniques, the Naïve Bayes classifier, and the Rapidminer software, we obtained a model with almost 60% accuracy. This model was applied to predict the academic performance of the following generation. After checking the results, 50% of the predictions were correct. However, we observed that, for students of certain engineering majors in the high and low groups, the model's accuracy was higher than 70%.
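The Naïve Bayes classifier used in this study scores each class by its prior probability times the per-feature likelihoods of the observed attribute values. A minimal categorical version with Laplace smoothing, on hypothetical socio-demographic features (the feature names and values are ours, not the paper's variables):

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Categorical Naive Bayes with Laplace (add-one) smoothing."""
    n = len(labels)
    prior = Counter(labels)
    counts = defaultdict(Counter)   # (feature index, label) -> value counts
    values = defaultdict(set)       # feature index -> set of seen values
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, y)][v] += 1
            values[i].add(v)

    def predict(row):
        def score(y):
            # Log prior plus smoothed log likelihood of each feature value.
            s = math.log(prior[y] / n)
            for i, v in enumerate(row):
                s += math.log((counts[(i, y)][v] + 1)
                              / (prior[y] + len(values[i])))
            return s
        return max(prior, key=score)

    return predict

# Hypothetical records: (high-school GPA band, works part-time?) -> outcome.
rows = [("high", "no"), ("high", "yes"), ("low", "yes"),
        ("low", "no"), ("mid", "yes"), ("mid", "no")]
labels = ["high", "high", "low", "low", "middle", "middle"]
predict = train_nb(rows, labels)
```

The smoothing keeps unseen feature-value/label pairs from zeroing out an entire class score.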
Mireya García-Vázquez, Alejandro Alvaro Ramírez-Acosta and Abril Fernanda Garcia-Ramirez. Impact of TEC techniques in video transmission with critical errors
Abstract: Transmission of video in error-prone environments, such as mobile communications, may lead to video corruption or loss, degrading the received visual quality. Therefore, it is essential to implement efficient, low-complexity error concealment methods that visually reduce the degradation caused by the missing information. The contribution of this work is a comparative analysis of five temporal error concealment (TEC) techniques in critical situations, where the frames are corrupted with different error patterns covering 50% of the image area. The results indicate, according to two similarity metrics (SAD-BM and SAD-OBM), that 'Best motion vector neighbor' and 'Zero motion vector' are the best TEC options. These results could help in the development of a robust yet simple TEC algorithm against several error distributions. This could be valuable in mobile video applications, in which users' video requests are becoming more and more demanding and significant loss of information can occur due to node congestion or excessive delay.
Static and Dynamic Semantics
Abstract: This paper aims to contribute to research on logic program updates. A (dynamic) semantics of sequences of logic programs is presented. The semantics is closely connected to the answer set semantics. We follow the construction of answer sets proposed by Dimopoulos and Torres, adding solutions for three kinds of conflicts to the construction. We also present some postulates characterizing relations between the static and dynamic semantics. Subsequently, a comparison with other postulates for logic program updates is provided. The presented postulates are satisfied by our dynamic semantics.
Lucía Barrón Estrada, Ramón Zatarain Cabada, Paul Tamayo, Silvestre Tamayo, Carlos Reyes-García and Humberto Pérez Espinoza. A Learning Social Network with Multi-modal Affect
Abstract: Integrating teaching with students' emotions seeks to optimize their learning. This paper presents a learning system which combines different technologies: a learning social network or knowledge society, an authoring tool to produce intelligent tutoring systems, and a system for emotion recognition. The recognition of affect or emotions is performed through modular neural networks which integrate the results of recognizing emotions in faces and voice. The emotion recognition system is integrated with the intelligent tutoring systems, which in turn are integrated into the learning social network.
Guillermo De Ita, Yolanda Moyao and Luis Altamirano. Applying Max2SAT to Efficient Belief Revision
Abstract: Belief revision is an NP-hard problem even when the knowledge base (K) is formed by Horn clauses. In this paper, we present a new belief revision operator that can be computed efficiently if the initial knowledge base is a two-conjunctive form (a conjunction of unit or binary clauses, denoted 2-CF). Such a revision operator *I on a new formula F, denoted (K *I F), relies heavily on the selected model I of F. If the model I can be computed in polynomial time (e.g., if F is Horn or a 2-CF), then the complete belief revision process has polynomial time complexity. However, if F is unrestricted, our proposal requires only one NP oracle call (the call necessary to compute a model of F), which involves exponential time in the length of F. Afterwards, computing (K *I F) takes polynomial time in the length of K. It is common to consider the length of K much greater than the length of F. Thus, our belief revision proposal is in the complexity class P^{NP[1]}.
CreaDO – A Methodology to Create Domain Ontologies using Parameter-based Ontology Merging Techniques
Abstract: Nowadays, ontologies have become a key mechanism to represent the knowledge of a specific domain. Domain ontologies can be used for different purposes; one of them is the development of semantic search engines that obtain precise results by considering the meaning of Web content. The construction of these ontologies usually requires a large amount of effort and time. One way of reducing such effort and time is a reuse approach: when ontologies overlap, alignment techniques can be used to merge them. However, the result of the ontology merging activity can be a very large ontology, which complicates the understanding and use of the merged ontology by final users. To overcome this problem, this paper describes a methodology, called CreaDO, to semi-automatically create domain ontologies. CreaDO performs a parameter-based ontology merge that allows the creation of a domain ontology containing only the information relevant to a given specific purpose.
Jesus S. Cepeda, Luiz Chaimowicz, Rogelio Soto and José L. Gordillo. Towards a Service-Oriented Architecture for Teams of Heterogeneous Autonomous Robots
Abstract: Developing an infrastructure for efficiently coordinating a group of autonomous robots has become a challenging need in the robotics community. In recent years, the use of teams of coordinated mobile robots has been explored in several application fields, such as military, exploration, surveillance, and search and rescue missions. For such fields, the use of multiple robots enables robustness in mission accomplishment. In this paper we present a service-oriented, distributed architecture for coordinating a set of mobile robots toward a common goal. The main design aspects concern ease of extensibility, scalability, re-usability, and integration of rapidly changing robotic hardware and software, and the support of different application domains rather than being limited to specific tasks' requirements and algorithms. We integrate state-of-the-art ideas such as Service-Oriented Robotics in order to achieve novel solutions. We demonstrate working cooperative autonomous operations using multiple robots with time-suitable communications. This work is an early phase of our ultimate goal: multiple heterogeneous autonomous robots forming an intelligent system able to deal with highly dynamic and challenging environments, such as a first responder team in search and rescue missions.
Alfonso Alba, Ruth Mariela Aguilar-Ponce, Javier Flavio Vigueras-Gómez and Edgar Arce-Santana. Phase correlation based image alignment with subpixel accuracy
Abstract: The phase correlation method is a well-known image alignment technique with broad applications in medical image processing, image stitching, and computer vision. This method relies on estimating the maximum of the phase-only correlation (POC) function, which is defined as the inverse Fourier transform of the normalized cross-spectrum between two images. The coordinates of the maximum correspond to the translation between the two images. One of the main drawbacks of this method, in its basic form, is that the location of the maximum can only be obtained with integer accuracy. In this paper, we propose a new technique to estimate the location with subpixel accuracy, by minimizing the magnitude of the gradient of the POC function around a point near the maximum. We also present some experimental results where the proposed method shows an increased accuracy with respect to the classic method. Finally, we illustrate the application of the proposed algorithm to the rigid registration of digital images.
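The basic integer-accuracy step of the method can be sketched directly from the definition: the POC function is the inverse FFT of the normalized cross-spectrum, and its peak gives the translation. The paper's subpixel refinement via the gradient of the POC surface is not reproduced here; this is only the classic integer-pixel estimator.

```python
import numpy as np

def poc_shift(a, b):
    # Normalized cross-spectrum; its inverse FFT peaks at the translation.
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    r = cross / np.maximum(np.abs(cross), 1e-12)
    poc = np.real(np.fft.ifft2(r))
    dy, dx = np.unravel_index(np.argmax(poc), poc.shape)
    # Map peak coordinates to signed shifts (FFT wrap-around).
    h, w = poc.shape
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(3)
a = rng.random((64, 64))
b = np.roll(np.roll(a, 5, axis=0), -7, axis=1)   # shift a by (5, -7)
shift = poc_shift(a, b)
```

For circularly shifted images the POC surface is an exact delta at the translation; the subpixel problem the paper addresses arises because real image pairs produce a smeared peak that falls between grid points.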
Miguel Romero, Rafael Lemuz, Irene Olaya Ayaquica Martínez and Griselda Saldaña. A Calibration Algorithm for Solar Tracking System
Abstract: Solar tracking systems that use cameras, lenses, and mechanical elements usually calibrate each device independently. While this scheme allows modular system characterization, it has the disadvantage of requiring additional expensive calibration patterns. In this work, we present a low-cost solar tracking platform integrating a real-time image processing module based on modern graphics cards. To avoid the use of additional components for system calibration, we propose an algorithm to perform motion station-camera calibration using minimal data from the earth-sun geometry, decoder motor values, and solar image locations. Thus, from a known set of measurements (angular values from the motor decoder and sun image locations), a linear system of equations allows finding the parameters that represent the transformation from pixel displacements to angular values. Results are presented to show the feasibility of our approach in circumventing camera lens distortion and mechanical instabilities of the low-cost tracking system.
Manuel Hernandez. Event Calculus for Reasoning about Erlang Systems
Abstract: The event calculus is a first-order logical formalism for dealing with temporal reasoning concerns. Erlang is a functional programming language designed from the ground up for tackling distributed programming problems. In this work, it is shown how the event calculus makes it possible to analyze some properties of Erlang programs.
Speech Recognition with Limited  Resources for Children and Adult  Speakers
Abstract: Children are natural users of speech technology. However, children's speech has proved harder to recognize than adults'. This implies that building a children's speech recognizer comparable to an adult recognizer would require a larger amount of recordings. In this paper, we present our experiments building such recognizers with a limited amount of resources, that is, a corpus with few speakers and recordings. We explore the effects of combining both resources in different ways. We also show the performance of the recognizers in a real-life scenario with spontaneous speech.
Juan Carlos Nieves and Mauricio Osorio. Studying Ideal Semantics via Logic Programming Semantics
Abstract: In this paper, we show that by using extensions of the well-founded semantics defined in terms of rewriting systems, one can characterize ideal sets. We also show that these extensions of the well-founded semantics define argumentation semantics with behaviour similar to the ideal semantics. On the other hand, we introduce a new logic programming semantics which is able to characterize the ideal set of an argumentation framework.
Deniz Angélica Ramos López, Giovanni Lizárraga Lizárraga and Miguel Gastón Cedillo Campos. An implementation of Fuzzy Logic for self-assessment security risk in exporting companies
Abstract: Ensuring security along the supply chain is fundamental to trade with other countries. Exporting companies should evaluate themselves to determine the level of risk associated with their departments and activities. When exporting firms can show that their level of risk is under control, their trade opportunities increase. Knowing the areas of opportunity makes it possible to strengthen vulnerable areas and obtain safety certifications that allow them to trade with their major trading partners. The present work introduces a versatile implementation of fuzzy tools that allows self-assessment of the risk of an exporting company. This fuzzy tool is easier to use than other tools in the literature that perform the same analysis, and it can be adapted to different classes of companies in the supply chain. It is compared to the Analytical Hierarchy Process to verify the accuracy of its results.
Rocio A. Lizarraga-Morales, Raul E. Sanchez-Yanez and Victor Ayala-Ramirez. Visual Texture Classification Using Fuzzy Inference
Abstract: A fuzzy rule-based system for visual texture classification is presented. Fuzzy rule-based classifiers can be highly interpretable for the end user. The proposed classification system is built in two phases: learning and recognition. In the learning phase, each texture class is modeled by a particular rule that includes linguistic terms obtained through a fuzzification of some visual features. For recognition purposes, an unknown texture image is subject to an inference process, and it is labeled as the class whose rule score is the highest. Experiments using a set of reference textures result in good classification rates.
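The one-rule-per-class scheme described above can be illustrated with a minimal sketch (this is an assumption-laden toy, not the authors' system: the membership shape, the min t-norm, and the class-mean prototypes are all our choices):

```python
import numpy as np

def tri_membership(x, center, width):
    """Triangular membership function centered at `center`."""
    return max(0.0, 1.0 - abs(x - center) / width)

class FuzzyTextureClassifier:
    """One rule per texture class: IF f1 is ~c1 AND f2 is ~c2 ... THEN class.
    The AND is the min t-norm; the highest-scoring rule labels the sample."""

    def __init__(self, width=1.0):
        self.width = width
        self.rules = {}          # class label -> feature centers

    def learn(self, label, feature_vectors):
        # Model each class by the mean of its training feature vectors.
        self.rules[label] = np.mean(feature_vectors, axis=0)

    def classify(self, x):
        scores = {lab: min(tri_membership(xi, ci, self.width)
                           for xi, ci in zip(x, centers))
                  for lab, centers in self.rules.items()}
        return max(scores, key=scores.get)
```

In the paper the linguistic terms come from fuzzifying visual texture features; here any numeric feature vector plays that role.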
Iván González and Leonardo Garrido. Spatial Distribution Through Swarm Behavior on a Military Group in the Starcraft Video Game
Abstract: New levels of challenge are required for real-time strategy games because of their learning curve. Usually this learning curve has a limit after which a player is able to win most of the matches. In this research, a method based on swarm intelligence is proposed to control a military swarm in the Starcraft environment in order to achieve a better performance against the built-in artificial intelligence of the game. This method provides a way to form the units such that they reach a suitable position to attack an enemy group composed of elements of the same type. The results obtained in the experiments show that it is possible to implement this method and achieve a better performance than the greedy algorithm of the built-in AI. Finally, since the method is nature-inspired, it is scalable, decentralized and robust, as shown in the results.
Juan J. Flores, Mario Graff and Marco Tulio Arreola. Lessons Learned in Evolving Artificial Neural Networks for Classification and Forecasting
Abstract: Evolutionary Computation has been used to automatically design Artificial Neural Networks (ANNs) capable of solving hard classification and forecasting problems. The evolutionary process contains a lot of information that may help in the design of future ANNs for similar problems. This contribution focuses on eliciting knowledge from the evolved ANNs with the goal of highlighting those parameters and/or characteristics that most impact the performance of these networks. ANNs are optimized using a Compact Genetic Algorithm. We study the use of ANNs on two classes of problems, namely, classification and time series forecasting. We experimentally show that this information is enough to produce competitive ANNs for the problems tested. As a consequence, one is able to reduce the size of the search space and accelerate the search without sacrificing the quality of the networks.
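The Compact Genetic Algorithm mentioned above replaces an explicit population with a probability vector over genes. A minimal sketch on a toy bitstring problem (OneMax stands in for the actual network-design fitness; all parameter values are our assumptions):

```python
import numpy as np

def compact_ga(fitness, n_bits, pop_size=50, max_iters=2000, rng=None):
    """Minimal compact GA (cGA): a probability vector models the population."""
    rng = rng or np.random.default_rng(0)
    p = np.full(n_bits, 0.5)
    for _ in range(max_iters):
        # Sample two candidate solutions from the probability vector.
        a = (rng.random(n_bits) < p).astype(int)
        b = (rng.random(n_bits) < p).astype(int)
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        # Shift probabilities toward the winner where the samples differ.
        diff = winner != loser
        p[diff] += (2 * winner[diff] - 1) / pop_size
        np.clip(p, 0.0, 1.0, out=p)
        if np.all((p < 0.05) | (p > 0.95)):   # effectively converged
            break
    return (p > 0.5).astype(int)

# Toy run: maximize the number of ones in a 20-bit string.
best = compact_ga(lambda x: x.sum(), n_bits=20)
```

For network design, the bits would instead encode architecture or weight parameters, as in the paper.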
Ana M. Martinez-Enriquez, Gonzalo Escalada-Imaz, Jorge Buenabad-Chavez and Ulises Revilla-Duarte. Automatic Discovering Web Services with Knowledge Based System and Workflows
Abstract: This paper describes an automatic Web services (WS) composition combining Knowledge Based Systems and a set of workflows, in order to resolve distributed problems.
A workflow represents the processes required for resolving problems/applications proposed by a non-computer user. The workflows handle the numerical information of the problem. The WS composition problem is resolved by using backward logical inferences that deduce whether or not the WS composition exists. We implemented a backward inference algorithm which is bounded by $O(n)$, $n$ being the size of the KBS. Moreover, our inference engine, proceeding in a top-down way, scans a reduced search space compared to that of forward chaining algorithms. Our experimental example deals with statistical analysis and its application in different domains. To prove that our solution is feasible for real cases, we validate our approach using Taverna, an open free software tool.
Andrés Espinal Jiménez, Marco Sotelo-Figueroa, Jorge Alberto Soria-Alcaraz, Manuel Ornelas Rodríguez, Hector Puga Soberanes, Martin Carpio and J. L. Rico. Comparison of PSO and DE for training neural networks
Abstract: The amount of computational resources required for neural network training by means of classical techniques can be prohibitive in real-time applications. A good training phase is needed for a high performance of a neural network. There are several metaheuristic techniques that obtain good results in reasonable time for problems in continuous search spaces. This paper explores the idea that it is possible to use these heuristic techniques for training neural networks. We present the proposed idea using the PSO and DE techniques for a hidden-layer neural network on several well-known instances. Finally, the Wilcoxon signed-rank test is used to discern the quality of the neural network training phase achieved by means of these metaheuristics.
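The core of such an approach is a continuous optimizer applied to the network's weight vector. A minimal global-best PSO sketch (not the authors' implementation; the sphere function stands in for a network's training error, and all coefficients are our assumptions):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration weights
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity pulls each particle toward its own and the global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Stand-in objective: minimize the sphere function over 5 "weights".
best_w, best_err = pso(lambda w: float(np.sum(w ** 2)), dim=5)
```

Training a real network would replace the objective with the error of the network evaluated at the candidate weight vector; DE follows the same pattern with a different update rule.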
Hector Rodriguez Rangel, Art Farley and Juan J. Flores. Qualitative representation of a bifurcation diagram from quantitative data
Abstract: A bifurcation diagram describes the behavior of a dynamical system. Qualitative reasoning could be used to predict the behavior of the system based upon its bifurcation diagram, but qualitative reasoning works with a qualitative representation.  In this paper, we present an algorithm that segments a given quantitative bifurcation diagram into monotonic segments having the same qualitative behavior. These monotonic segments are then interconnected into a network that defines a qualitative representation of the bifurcation diagram.  This network can be used to simulate the behavior of the system from a given initial situation and control sequence.  We present several examples to demonstrate the algorithm and show the representations it creates.
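The segmentation step described above — splitting a sampled curve into maximal monotonic runs — can be sketched as follows (an illustrative reduction of the idea, not the authors' algorithm):

```python
import numpy as np

def monotonic_segments(y):
    """Split a sampled curve into maximal monotonic runs.
    Returns (start, end, direction) triples over sample indices,
    with direction +1 (rising), -1 (falling) or 0 (flat)."""
    dy = np.sign(np.diff(y))
    segments, start = [], 0
    for i in range(1, len(dy)):
        if dy[i] != dy[start]:
            segments.append((start, i, int(dy[start])))
            start = i
    segments.append((start, len(dy), int(dy[start])))
    return segments
```

In the paper these segments are then linked into a network that supports qualitative simulation; here only the segmentation itself is shown.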
Victor Ayala-Ramirez and Angel Noe Martinez-Gonzalez. Face Detection Using Neural Networks
Abstract: Our proposal here is to use neural networks in the development of a face detection system capable of operating in real time. The system performs a guided face search on interest regions exhibiting human skin color properties. These properties are detected on a pixel-by-pixel basis. The proposed system can be used as a module of face recognition systems, video surveillance systems, and access control systems, for example.
Ivo Humberto Pineda Torres, María Somodevilla García, Mario Rossainz López, Sergio M. Dorantes and José De Jesús Lavalle-Martínez. Reference Regions for Image Classification
Abstract: Due to the amount of visual information that currently exists, there is a need to classify it. In this paper we present an alternative method for image categorization according to texture content using Gabor filters and a Support Vector Machine (SVM). To perform the image classification, we rely on filtering techniques for feature extraction combined with statistical learning techniques to perform the data separation. The experiments were carried out on a set of images containing coastal beach scenes and a set of images containing city scenes. A feature vector is obtained by applying a bank of Gabor filters to the input images; the resulting feature space is then used as input to the SVM classifier. The Support Vector Machine is responsible for learning a model capable of separating the sets of input images. Experimental results demonstrate the effectiveness of the proposed dual method, achieving a classification error rate near 9%.
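The Gabor filter bank used for feature extraction can be sketched like this (an illustrative reconstruction with assumed sizes, wavelengths and orientations — the paper's actual bank parameters are not stated in the abstract):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the grating is oriented at angle theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(image, wavelengths=(4, 8), orientations=(0, np.pi / 2)):
    """Mean filter-response magnitude per (wavelength, orientation) pair."""
    feats = []
    for lam in wavelengths:
        for th in orientations:
            k = gabor_kernel(15, lam, th, sigma=lam / 2)
            # Circular convolution via FFT keeps the sketch short.
            resp = np.abs(np.fft.ifft2(np.fft.fft2(image) *
                                       np.fft.fft2(k, s=image.shape)))
            feats.append(resp.mean())
    return np.array(feats)
```

The resulting feature vector would then be fed to an SVM, as the abstract describes.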
Blanca Lorena Villarreal Guerra and Jose Luis Gordillo Moscoso. Directional Aptitude Analysis  in Odor Source  Localization Techniques for Rescue Robots Applications
Abstract: Olfaction is an interesting new challenge for intelligent systems to be developed and applied in rescue robot applications. The difficulties of developing chemical sensors are that the chemical reaction changes the sensor and there is always some interference present. The use of a sniffing robot following the smell of specific odors is one way to increase the efficiency and speed of a multi-robot team in a disaster area. The most important task of a sniffing robot in a rescue application is odor source localization, which, inspired by nature, requires the capacity for directionality. The intention of this document is to prove that the diffusion, advection, and gradient behaviors are actually present in a semi-controlled environment with an odor source simulating a continuous and relatively constant gas leak.
Juergen Landes and Ricardo Buettner. Job allocation in a temporary employment agency via multi-dimensional price VCG auctions using a multi agent system
Abstract: We consider the problem of how a temporary employment agency allocates temporary agency workers to jobs. To address this problem, we extend the Vickrey-Clarke-Groves auction to a mechanism in which agents make bids in a multi-dimensional contract space. That is, agents can specify how much they value a work contract consisting of several components such as wage per hour, days of leave, overtime premiums and hours of work. We show that the mechanism we develop satisfies incentive compatibility and Pareto efficiency.
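The VCG mechanism at the heart of this construction can be illustrated on a simplified one-dimensional version of the problem (a toy sketch, not the authors' multi-dimensional mechanism: agents report a single value per job, and payments follow the standard Clarke pivot rule):

```python
from itertools import permutations

def vcg_assignment(values):
    """values[i][j]: agent i's reported value for job j (square matrix).
    Returns the welfare-maximizing assignment and each agent's VCG payment."""
    n = len(values)

    def best(agents):
        # Brute-force optimal assignment of the given agents to the n jobs.
        agents = list(agents)
        top = max(permutations(range(n), len(agents)),
                  key=lambda jobs: sum(values[a][j] for a, j in zip(agents, jobs)))
        return dict(zip(agents, top)), sum(values[a][j] for a, j in zip(agents, top))

    assignment, welfare = best(range(n))
    payments = {}
    for i in range(n):
        # Clarke pivot: the harm agent i imposes on the others by participating.
        _, welfare_without_i = best(a for a in range(n) if a != i)
        others_welfare = welfare - values[i][assignment[i]]
        payments[i] = welfare_without_i - others_welfare
    return assignment, payments
```

In the paper, bids range over whole contracts (wage, leave, overtime, hours) rather than scalar job values, but the incentive structure is the same.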
Application of artificial neural networks to predict the selling price in the real estate valuation process
Abstract: An artificial neural network (ANN) approach was applied to develop a mathematical model which predicts the sales price of residential properties. The study is based on the evaluation of home sales in Casablanca, Morocco, in North Africa. A feedforward network with one hidden layer was trained using an original set of residential property valuation records. The ANN was obtained from 148 sets of input-output patterns using the backpropagation algorithm. For the networks, the Levenberg-Marquardt learning algorithm, the hyperbolic tangent sigmoid transfer function and the linear transfer function were used. The best fit to the training data set was obtained from an ANN architecture with five neurons in the hidden layer, which made it possible to predict the sales price of homes. The model gave good predictions with a high correlation coefficient (R2=0.952). Also, the validation data set simulations were in good agreement with the original data. It is suggested that the new ANN model could be used as a tool for the reliable prediction of selling prices.
Jorge Kinoshita. Semi-Global Alignment of Lines and Columns for Robust Table Extraction from periodic PDF Documents
Abstract: In a PDF document there is no meta-information that describes a table structure; therefore, it is difficult to extract data from a table in PDF documents, mainly when the table is not isolated in a single-column page. The extraction is harder when the table is in a multi-column page or appears beside another table. Given a table tab1 (manually extracted from the document D1.pdf, e.g., a balance sheet of 1998) and a PDF document D2.pdf (e.g., the annual report of 1999), we propose to automatically extract a table tab2 (e.g., the balance sheet of 1999) inside D2.pdf through a dual (lines and columns) semi-global alignment between tab1 and D2.pdf, even when tab2 is in difficult places. This is especially useful for extracting data from financial reports because a table (e.g., a balance sheet) in a given year is very similar to the corresponding one in the next year. For example, we can easily extract the "Total Current Assets" in 1999 through the dual semi-global alignment of the 1998 balance sheet (tab1) and the 1999 financial report (D2.pdf). To our knowledge, this paper is the first to propose data extraction from tables in PDF documents through a dual semi-global alignment.
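The underlying primitive, a semi-global alignment where the pattern (a table row or column) must match end to end but may occur anywhere in the document, can be sketched as a dynamic program (an illustrative one-dimensional version with assumed scores, not the paper's dual line-and-column method):

```python
def semiglobal_align(pattern, text, match=2, mismatch=-1, gap=-1):
    """Semi-global alignment: `pattern` aligns end to end, but it may start
    and finish anywhere inside `text` (free end-gaps in `text`).
    Returns (best score, end position of the match in `text`)."""
    m, n = len(pattern), len(text)
    # score[i][j]: best alignment of pattern[:i] against a suffix of text[:j].
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = score[i - 1][0] + gap   # gaps in the pattern are charged
    # The first row stays 0: leading gaps in the text are free.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if pattern[i - 1] == text[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Trailing gaps in the text are free: take the best cell in the last row.
    end = max(range(n + 1), key=lambda j: score[m][j])
    return score[m][end], end
```

A traceback from the best cell would recover the start position as well; the paper applies this idea over both lines and columns of the extracted text.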
Angel Kuri-Morales and Jose Ignacio Lopez. A Novel Method to Determine a Robot's Position Based on Machine Learning Strategies
Abstract: An open problem in robotics is the one dealing with the way a mobile robot locates itself inside a specific area. The problem itself is vital for the robot to correctly achieve its goals. There are several ways to approach this problem, for example, robot localization using landmarks, calculation of the robot's position based on the distance it has covered, etc. Many of these solutions imply the use of active sensors in the robot to calculate a distance or notice a landmark. However, there is a solution which has not been explored and is the main topic of this paper. In essence, the solution we tested has to do with the possibility that the robot can determine its own position at any time using only a single sensor and a reduced database. This database contains all the information needed to match what the robot is sensing with its spatial position. In order for the method to be practically implementable, we reduced the number of necessary matches by defining a subset of the original database images. There are two issues which have to be solved in order to implement such a solution: a) the number of elements in every subset of the matching images and b) the absolute positions of each of these elements. Once these are determined, the matching process is very fast and ensures the adequate identification of the robot's position without errors. However, the two goals we just mentioned impose conflicting optimization goals. On the one hand we seek the largest subset so that position identification is accurate. On the other we wish this subset to be as small as possible so that the online processing is as fast as possible. These conditions constitute a multi-objective optimization problem. To solve it we used a Multi-Objective Genetic Algorithm (MOGA) which minimizes the number of pixels required by the robot to identify an image. To test the validity of this approach we also solved this problem using a statistical methodology which solves problem (a) and a random mutation Hill Climber to solve problem (b).
Edgar A. Castro-Astengo, Felix F. González-Navarro and Brenda L. Flores-Ríos. Microarray Gene Subset Selection in Amyotrophic Lateral Sclerosis Classification
Abstract: Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease causing a progressive loss of motor neurons. The disease prevalence is 5 per 100,000 people. There is no cure and it leads generally to death from respiratory failure in approximately 3-5 years after the first symptoms. The exact causes of the disease are still unknown, however, almost 20% of the known cases have shown gene mutations. The use of gene expression analysis is a powerful tool to discover the most relevant genes in cellular processes, but the high dimensionality of the data makes the feature selection a challenging task. Using a filter method combined with machine learning algorithms, an ALS data set is explored. Bootstrap resampling is used as a way to achieve stability in the whole process.
Guillermo Kitazawa, Jose V Abellan-Nebot and Hector R. Siller Carrillo. Comparison of Analytical and Artificial Intelligent Models for Quality Assurance in Micro-milling Operations
Abstract: Despite the high performance of artificial intelligence (AI) models for part quality prediction and control in machining operations, only well-known analytical models are commonly used in industry. This paper compares different analytical models with AI models for quality assurance in the fabrication of fluidic channels in micro-milling operations. The comparison of both types of models is conducted in terms of accuracy, ability to optimize the operation while ensuring part quality, and robustness of predictions to environmental changes. The results show the main differences between these two types of models and reflect their advantages and drawbacks according to the analyzed application.
Jesus Del Bosque, Christian Hassard and José Luis Gordillo. Velocity control of an electric vehicle over a CAN network
Abstract: Distributed control applications require a reliable network for information exchange. The network discussed in this paper uses the CAN bus as a means of communication to control the speed of an electric vehicle. A National Instruments Programmable Automation Controller (CompactRIO), programmed in the LabVIEW environment, is used to execute one of two speed control algorithms (PID or fuzzy logic) to test their performance on the network, and it also acts as a human-machine interface via a personal computer. The proposed network provides robustness in terms of communication and opens the possibility of expansion towards a complete control architecture for building a fully autonomous vehicle.
Daniel Flores and Jorge Cervantes. Rank Based Evolution of Real Parameters on Noisy Fitness Functions: Evolving a Robot Neurocontroller
Abstract: We present a rank-based evolutionary algorithm for representations in the real numbers. We introduce a new rank-based selection operator and a new variation of rank-based mutation that act on a real-number representation. The problem on which we tested the algorithm was to evolve a fixed-topology feedforward artificial neural network used as a controller for a robot. In order to be successful, the robot must be able to use both proximity sensors and video input, but there is some level of noise in them. The test results show how the proposed operators are suitable for this kind of problem, where the fitness landscape is noisy and little else is known about it.
Simulation of the cardiac cycle based on the Morris-Lecar cable equation for the diagnosis of cardiac arrhythmia
Abstract: We present a program which simulates the cardiac cycle in the heart's wall using the Morris-Lecar cable equation [1]. The inputs to the equation are the calcium-potassium ionic concentrations [2,3] of a patient with cardiac problems. The hypothesis is to determine whether the patient will have an arrhythmia; the simulation [4] is based on the initial myoelectrical resting voltage and on the capacitance, admittance, and calcium-potassium pump acting along the cellular membrane. The simulation predicted the possible arrhythmia under hyperkalemia that the patient had, which was confirmed by the myoelectric study done in the hospital. This program could be used with patients with cardiac problems in order to find a possible arrhythmia.
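For orientation, the single-cell Morris-Lecar membrane model (without the spatial cable term the paper uses) can be integrated with forward Euler as follows. This is a generic textbook sketch with a standard parameter set, not the authors' program or their patient-specific inputs:

```python
import numpy as np

def morris_lecar(I_ext=100.0, t_max=500.0, dt=0.05):
    """Forward-Euler simulation of the Morris-Lecar membrane model.
    Standard Hopf-regime parameters; units: mV, ms, uA/cm^2, uF/cm^2."""
    gCa, gK, gL = 4.4, 8.0, 2.0          # maximal conductances
    VCa, VK, VL = 120.0, -84.0, -60.0    # reversal potentials
    C, phi = 20.0, 0.04
    V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
    steps = int(t_max / dt)
    V, w = -60.0, 0.0                    # resting voltage, K+ gating variable
    trace = np.empty(steps)
    for k in range(steps):
        m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
        w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
        tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
        dV = (I_ext - gL * (V - VL) - gCa * m_inf * (V - VCa)
              - gK * w * (V - VK)) / C
        dw = phi * (w_inf - w) / tau_w
        V, w = V + dt * dV, w + dt * dw
        trace[k] = V
    return trace
```

The cable equation adds a diffusive coupling term between neighboring membrane segments; the ionic concentrations mentioned in the abstract would enter through the reversal potentials.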
Luis D. Lara and Rafael Lemuz. Algebraic Models for Accurate LFS Reconstruction
Abstract: In this paper, we analyze four algebraic models for representing light fall-off. The algebraic models are estimated using nonlinear model fitting techniques. The alternative light attenuation models are then evaluated in a $3D$ reconstruction method. The comparative analysis of the $3D$ reconstruction error is presented and shows that the algebraic models are more accurate. The advantages and disadvantages of the alternative models are discussed and compared with the traditional representation of light fall-off known as the inverse square law.
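The baseline inverse square law mentioned above is the model $I = a\,d^{-2}$; fitting a general power-law exponent to measured intensities is a small least-squares exercise (our own illustrative sketch, not one of the paper's four models):

```python
import numpy as np

def fit_falloff_exponent(d, I):
    """Fit the power-law fall-off model I = a * d**(-b) by linear least
    squares in log-log space; returns (a, b)."""
    A = np.vstack([np.ones_like(d), -np.log(d)]).T
    coef, *_ = np.linalg.lstsq(A, np.log(I), rcond=None)
    return float(np.exp(coef[0])), float(coef[1])

# Synthetic data obeying the inverse square law: the fit recovers b = 2.
d = np.linspace(1.0, 5.0, 50)
I = 3.0 / d ** 2
a, b = fit_falloff_exponent(d, I)
```

A fitted exponent deviating from 2 on real data is one way to motivate the alternative algebraic models the paper evaluates.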
Diego Uribe. Presence or Semantic Information in Sentiment Classification?
Abstract: This paper analyses the implications of using a content vector based on frequency and semantic information. In our phrase pattern-based method, we automatically construct semantic lexicons to determine the semantic orientation of each feature, that is, the degree of subjectivity associated with the n-gram. Using two different datasets with Bayesian learning methods, our results show that it is possible to maintain state-of-the-art classification accuracy.
A Nominal Filter for Web Search Snippets: Using the Web to Identify Members of Latin America's Highly Qualified Diaspora
Abstract: This paper presents efforts aimed at using Natural Language Engineering (NLE) techniques to solve the practical problem of gathering data for evaluating the impact of talent mobility on the development of three Latin American countries: Uruguay, Argentina and Colombia. An NLE system is under construction which is designed to help social scientists analyse this impact through an on-going process of interpretation. The system UNOPORUNO presented here seeks to assist social scientists in carrying out this type of qualitative research in three ways. To help in corpus construction, the system enriches Web People Search queries with personal identity keywords (PIKs). As a result, the Web search snippets retrieved using this method delimit what, in theory, is a corpus of semantically rich data for qualitative research. Then, the coherence of this data is automatically verified by applying nominal filters to eliminate those snippets which do not respect valid variations of personal names. Finally, the filtered results are ordered and presented to social scientists in a way which allows them to decide whether or not to include the new data in their expanding corpus. Data gathering and selection is a fundamental phase in qualitative research. Our goal is to produce a computer-supported infrastructure for doing this type of sociological research using data from the Web.
Erika Velazquez-Garcia, Ivan Lopez-Arevalo and Victor Sosa-Sosa. Semantic Graph-based Approach for Document Search
Abstract: Current document search engines base searches on the file name or content, which means that the word, or part of the word, to search for must match exactly. This article proposes a semantic graph-based method for document search. The approach allows users to organize, search, and display documents or groups of documents. Groups are formed according to topics contained in the documents.
Estimation of anisotropic water diffusion indexes on axon bundle crossings
Abstract: The estimation of principal diffusion directions in brain white matter has been extensively studied by processing Diffusion Weighted Magnetic Resonance Images. Those studies present competitive results both for a single diffusion orientation and for multiple diffusion orientations at voxels where the axon fibers cross or split. However, to the best of our knowledge, all the available methods are unable to estimate the anisotropy indexes in the multiple-diffusion-direction case. Those anisotropy indexes are used to determine properties of the axon packs in the brain and thus are related to the diagnosis of brain diseases. In this paper we propose a new method based on the selection of Diffusion Basis Functions which is capable of estimating the anisotropy indexes. Our approach is composed of two steps: the first stage estimates the diffusion orientation based on a raw anisotropic model; the second stage estimates the anisotropy indexes by setting a high-resolution diffusion basis built from statistical analysis of parallel and radial diffusion coefficients in the brain white matter. Our experiments are presented on synthetic diffusion-weighted data.
Driss Sadoun, Catherine Dubois, Yacine Ghamri-Doudane and Brigitte Grau. An ontology for the conceptualization of an intelligent environment and its operation
Abstract: Nowadays, sensors and actuators are increasingly used in different spaces, creating intelligent environments. The aim of this article is to describe a conceptualization of an intelligent environment and its operation, in order to check its consistency and conformity. This conceptualization is done through an ontology representing the domain knowledge, whose elements are instantiated from natural language texts describing the physical configuration of an intelligent environment and from a scenario describing the operation desired by the user of the environment. We chose OWL to represent our environment formally, augmented with SWRL rules to represent the dynamic aspect of the operating system and SQWRL to query our conceptual model. We show how consistency and conformity are checked thanks to this formalism.
Abstract: Recitation of the Holy book of Muslims, the Holy Quran, is a religious duty and hence is done with utmost care so that no mistakes are made while reading it. These mistakes may include the wrong utterance of words, misreading words, and punctuation and pronunciation mistakes. Believers of Islam are spread all over the world, and hence there can also be differences in accent. To avoid this, Tajweed rules are applied to ensure that the utterance follows fixed rules, so that there is no variance in the recitation of the Holy book among different reciters. In addition, people are encouraged to memorize the whole book; a person who does so is called a Hafiz. Knowing the whole book by heart down to every word, along with the Tajweed rules, a Hafiz can guide other learners, who learn by reciting to the Hafiz and having their recitation corrected. But the availability of a Hafiz can be a problem where Islam is not the dominant religion; furthermore, competency and level of expertise are of utmost importance. To get around this problem we have designed and developed a system, E-Hafiz, based on the idea that Tajweed rules are used to train learners how to recite the Quran. To achieve this we used the Mel-Frequency Cepstral Coefficient (MFCC) technique: we extract the features of recorded voices using MFCC and compare them with experts' voices stored in a database. Any mismatch at the word level is pointed out and the user is asked to correct it.
Julio Castillo, Marina Cardenas, Juan Carlos Vazquez and Maria Del Carmen Rojas. Risk Health Housing (RHH): Model Health Risks of Urban Housing using Artificial Neural Networks
Abstract: This paper presents an approach based on complex thought, collective health, and global vulnerability, from a holistic conception, with the aim of determining the health risk caused by precarious urban dwellings. We propose a computational model based on an artificial neural network that estimates the vulnerability component of the health risk of human dwellings. The model is being used in several countries in South America. Preliminary conclusions show that field tests match expert opinion when validating the methodology for measuring the health risk and the associated software system.

# Special session: RCS

José Martín Castro-Manzano, Axel Barceló-Aspeitia and Alejandro Guerra-Hernández. Consistency and Soundness for a Defeasible Logic of Intention
Abstract: On the one hand, defeasible logics have been mainly developed to reason about beliefs but have barely been used to reason about temporal structures; on the other hand, intentional logics have mostly been used to reason about intentional states and temporal behavior, but most of them are monotonic. Thus, a defeasible temporal logic that deals with the non-monotonicity of intentions while taking care of temporal structures has not yet been developed. In this work we propose a defeasible temporal logic, built with the help of temporal semantics and a non-monotonic framework, in order to model intentional reasoning. We also show the consistency and soundness of the system.
Alfredo Toriz Palacios, Abraham Sánchez López, René Zapata and María Auxilio Osorio Lama. Mobile robot SPLAM for robust navigation
Abstract: This paper describes a simultaneous planning, localization and mapping (SPLAM) methodology, where the robot explores the environment efficiently while also considering the requisites of the simultaneous localization and mapping algorithm. The method is based on the randomized incremental generation of a data structure called the Sensor-based Random Tree, which represents a roadmap of the explored area with an associated safe region. A continuous localization procedure based on B-Spline features of the safe region is integrated in the scheme.
Estimation of distribution algorithms for symbolic regression of industrial processes
Abstract: Optimization of industrial processes is a desirable task in engineering; however, it is often done by trial and error, and relying on a design of experiments or on measured data alone is not easy. Alpha-beta symbolic regression is a novel and easy way to obtain mathematical models of processes using only measured or experimental data. The resulting equations can then be used as the objective function of an evolutionary algorithm, such as an estimation of distribution algorithm, for optimization. Two cases are presented using measured data from real processes where optimization of specific responses is desirable.
Jean Marie Vianney Kinani, Jesus Salvador Velazquez Gonzalez, Javier Fransisco Gallegos Funes and Alberto Jorge Rosales Silva. Auto-Diagnosis of brain tumors based on multi-parameters analysis
Abstract: This project addresses the task of automatically diagnosing brain tumors in magnetic resonance images. The motives behind this are potential applications in neurological hospitals, where the algorithm will play an important role in diagnosing tumors, assessing tumor growth and treatment response, enhancing computer-assisted surgery, planning radiation therapy, and constructing tumor growth models. The algorithm as a whole forms an image processing pipeline consisting of noise reduction, contrast enhancement, segmentation, feature extraction and decision-making. The key advantage of this framework is the simultaneous use of features computed from the region of interest's intensity properties in order to identify the unhealthy tissues in the MRI image. So far the region of interest has been identified and its features computed, and the current stage focuses on how to automatically coalesce these features in order to recognize the tumors.
Santiago Omar Caballero Morales and Edgar De Los Santos Ramírez. Native Speaker Dependent System for the Development of a Multi-User ASR-Training System for the Mixtec Language
Abstract: The Mixtec language is one of the main native languages in Mexico, present mainly in the regions of Oaxaca and Guerrero. Due to urbanization, discrimination, and limited attempts to promote the culture, the native languages are disappearing. Most of the information available about these languages (and their variants) is in written form, and while there is speech data available for listening and pronunciation practice, a multimedia tool that incorporates both speech and written representation could improve the learning of the languages by non-native speakers, thus contributing to their preservation. In this paper we present some advances towards the development of a multi-user Automatic Speech Recognition (ASR) training system for one variant of the Mixtec language that could be used for the design of speech communication, translation, and learning interfaces for both native and non-native speakers. The methodology and proposed implementation, which consisted of a native speaker-dependent (SD) ASR system integrated with an adaptation technique, showed recognition accuracies over 90% and 85% when tested by male and female non-native speakers, respectively.
Alejandro Alvaro Ramírez-Acosta, Mireya García-Vázquez and Gustavo Vidal-González. Porting of MPEG-4 ASP codec for ARM9 processor in the DVEVM355 target architecture
Abstract: Nowadays, the complexity of embedded systems has increased dramatically, making the design process more complex and time-consuming. This situation has caused significant delays in introducing new products to market and serious economic problems for several companies. Thus, integrated circuit manufacturers have revised, redesigned, or abandoned the traditional paradigms of electronic circuit and system design. This effort has allowed the emergence of applications based on design platforms (platform-based design, PBD). This paper describes the implementation of an MPEG-4 Advanced Simple Profile video codec prototype based on Xvid software, ported to the ARM9 platform-based architecture of the Texas Instruments (TI) DVEVM355 evaluation development platform. Our codec implementation follows TI's eXpressDSP Digital Media (xDM) standard. The importance of our work is that the implemented xDM-based codec can be integrated with other software to build a multimedia system based on the DVEVM platform in a very short time. The experimental evaluation of our MPEG-4 ASP-Xvid codec demonstrates high performance and efficiency compared to TI's MPEG-4 Simple Profile video codec.
Adriana Peña Pérez Negrón, Raúl Antonio Aguilar Vera and Elsa Estrada Guzmán. Modeling an Agent for Intelligent Tutoring in 3D CSCL based on Nonverbal Communication
Abstract: During collaboration, people's nonverbal involvement is mainly aimed at the achievement of the task at hand. In 3D Collaborative Virtual Environments (CVE) the users' graphical representations −their avatars− are usually able to display some nonverbal communication (NVC), such as gazing or pointing, in such a way that their NVC cues can be the means to understand their collaborative interaction; the automatic interpretation of these cues may in turn provide a virtual tutor with the tools to support collaboration within a learning scenario. In order to model a virtual tutor for 3D collaborative learning environments, we discuss here, based on a literature review, which NVC cues to collect, how to relate them to indicators of collaborative learning such as participation or involvement, and how to relate them to task stages (i.e. planning, implementing, and evaluating). In this context, results from collecting NVC cues in an experimental application during the accomplishment of a task are then analyzed.
Damny Magdaleno, Leticia Arco, Rafael Bello, Michel Artiles, Juan Manuel Fernández, Juan Huete and Ivett E. Fuentes. New textual representation based on structure and content
Abstract: The effectiveness of a document representation is directly related to how well the contents of one document can be compared with those of another. When representing XML documents, not only the content is important: the structure can also be exploited in text mining tasks. Unfortunately, most XML document representations do not consider both components. This paper presents a new form of textual representation for XML documents that uses both their structure and their content. The main results are: a new form of textual representation, following the criterion that the location at which a term appears within a document determines how relevant the term is to the document; its integration into the GARLucene software, increasing its potential for handling XML documents; and clustering experiments based on differential Betweenness over 25 textual collections represented with the new proposal, which yielded better results than the same collections represented with the classic VSM.
Jorge Alberto Soria-Alcaraz, Martin Carpio, Hector Puga Soberanes and Marco Sotelo-Figueroa. Methodology of Design in a 2-phase algorithm applied to ITC timetabling problem
Abstract: A methodology of design is a strategy applied before the execution of an algorithm for the timetabling problem. This strategy has recently emerged, and aims to improve the obtained results as well as to provide a context-independent layer over different versions of the timetabling problem. In this paper the proposed methodology improves the performance of a 2-phase algorithm, a method that first finds a feasible solution and then enhances it in search of a perfect solution. Our 2-phase algorithm uses a GA with a non-direct representation and Simulated Annealing with a CSP heuristic neighborhood. Finally, our benchmark consists of 20 timetabling instances from the problem described by the International Timetabling Competition of the Practice and Theory of Automated Timetabling (PATAT) community.
Jaime R. Ruiz, Leopoldo Altamirano, Eduardo F. Morales and Adrián León. Automatic Recognition of Human Activities under Variable Lighting
Abstract: The recognition of activities plays an important role in the analysis of human behavior in video sequences. It is desirable that monitoring systems accomplish their task under conditions different from the training ones. A novel method is proposed for activity recognition under variable lighting. The method starts with an automatic segmentation procedure to locate the person. It takes advantage of the ability of the Harris and Harris-Laplace operators to capture information despite extreme lighting changes in order to locate corners along the human body. Corners are followed through the images to generate a set of trajectories that represent the behavior of the human. The method proves effective at recognizing behaviors through a comparison procedure based on dynamic time warping, and also works well with examples of activities under different lighting.
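The comparison procedure named above rests on dynamic time warping. A minimal sketch of the classic DTW recurrence is given below; the abstract does not specify the trajectory features or local distance, so 1-D sequences with an absolute-difference cost are assumed here purely for illustration:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical trajectories align at zero cost, and DTW absorbs a
# time-stretched copy at no extra cost -- the property that makes it
# suitable for comparing activities performed at different speeds:
print(dtw([1, 2, 3], [1, 2, 3]))     # 0.0
print(dtw([1, 2, 3], [1, 1, 2, 3]))  # 0.0
```

Real trajectory points would be 2-D corner positions with a Euclidean local cost, but the recurrence is unchanged.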
Andriy Sadovnychyy. Modeling of 2D protein folding using genetic algorithms and distributed computing
Abstract: This work presents an application of parallel programming to study a scalable problem in the area of bioinformatics, protein folding, using a genetic algorithm. The properties of a protein depend on its configuration in space. Calculating this configuration is a hard problem, so approximate models such as the 2D square lattice are used. To find an optimal configuration of the protein in space (the configuration with minimal energy), genetic algorithms are used; they make it possible to find the optimal configuration several times faster than a full calculation of the interactions between atoms. A disadvantage of genetic algorithms is their computational load, so a parallel genetic algorithm is applied in this work. Parallel genetic algorithms perform operations such as mutation, crossover, and fitness evaluation in parallel; multi-agent systems are useful here because they make it possible to run many independent functions in parallel.
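The fitness the genetic algorithm minimizes on a 2D square lattice is the conformation energy. The abstract does not spell out its energy function, so the sketch below assumes the standard HP-model convention: -1 for every pair of hydrophobic (H) residues that are lattice neighbors but not consecutive in the chain:

```python
def hp_energy(sequence, coords):
    """Energy of an HP-model conformation on a 2D square lattice:
    -1 for each pair of non-consecutive H residues on adjacent sites."""
    pos = {tuple(c): i for i, c in enumerate(coords)}
    energy = 0
    for i, (res, (x, y)) in enumerate(zip(sequence, coords)):
        if res != 'H':
            continue
        # only look right and up, so each contact is counted once
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# Folding "HHHH" into a 2x2 square creates one non-consecutive H-H contact:
print(hp_energy("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -1
```

A GA individual would encode a self-avoiding walk (e.g. as a string of turns), decode it to `coords`, and use `hp_energy` as the fitness to minimize.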
Felix Calderon, Juan Jose Flores and Erick Galaad De La Vega. Increasing the Performance of Differential Evolution by Random Number Generation with the Feasibility Region Shape
Abstract: Global optimization based on evolutionary algorithms can be used for many engineering optimization problems, and these algorithms have yielded promising results for solving nonlinear, non-differentiable, and multi-modal optimization problems. In general, evolutionary algorithms require a set of random initial values; in the case of constrained optimization problems, the challenge is to generate random values inside the feasible region. Differential evolution (DE) is a simple and efficient evolutionary algorithm for function optimization over continuous spaces. It outperforms search heuristics when tested over both benchmark and real-world problems. DE with penalty cost functions is the technique most used to deal with constraints, but the solution is displaced by the penalty factor, losing accuracy. Additionally, the probability of reaching the optimal value is near zero for some constrained optimization problems, because the optima of the objective function are located outside the feasible region, so the optimal solutions of the problem lie at the border of the feasible region. In this paper we propose an improved DE algorithm for linearly constrained optimization problems. This approach changes the restricted problem into a non-restricted one, since all individuals are generated inside the feasible region. The proposed modification to DE increases the accuracy of the results compared to DE with penalty functions; this is accomplished by the generation of random numbers whose convex hull is shaped by the feasible region. We tested our approach with several benchmark and real problems, in particular with problems of economic dispatch of electrical energy.
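One common way to generate random points whose convex hull matches a linearly constrained feasible region is to take random convex combinations of the region's vertices. The abstract does not describe the authors' exact generator, so the sketch below is only an illustration of that idea under the assumption that the vertices are known; note that this scheme is feasible by construction but not uniformly distributed:

```python
import random

def random_feasible_point(vertices):
    """Random point inside the convex hull of the given vertices,
    built as a convex combination with random normalized weights."""
    weights = [random.random() for _ in vertices]
    total = sum(weights)
    weights = [w / total for w in weights]  # weights sum to 1
    dim = len(vertices[0])
    return tuple(sum(w * v[k] for w, v in zip(weights, vertices))
                 for k in range(dim))

# Every generated point stays inside the triangle x >= 0, y >= 0, x + y <= 1:
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
p = random_feasible_point(tri)
print(0.0 <= p[0] and 0.0 <= p[1] and p[0] + p[1] <= 1.0 + 1e-9)  # True
```

A DE population initialized this way never needs a penalty term, since no individual can start outside the feasible region.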
Evaluation of Hydrocephalic Ventricular using Brain Images with Fuzzy Logic and Computer Vision Methods
Abstract: The purpose of this paper is to classify cases of hydrocephalus in human brain images using fuzzy logic, based on the size of the ventricles of the human brain, combining intelligent techniques with computer vision. The analysis of ventricle size was based on magnetic resonance databases of normal ventricles; the fuzzy logic inference system and the vision component use the height, area, and volume of the ventricles to classify the hydrocephalic cases. The height, area, and volume of the left and right ventricles were measured in 13 individuals: 10 normal and 3 cases of hydrocephalus. We expect that, with the proposed method, the symptoms of hydrocephalus can be classified by ventricle size with a significantly higher success percentage when a large number of hydrocephalus cases is considered.
Efrén Carbajal and Leonardo Garrido. Ball Chasing Coordination in Robotic Soccer Using a Response Threshold Model with Multiple Stimulus
Abstract: In any system made of several robots, task allocation is an indispensable component for achieving coordination. We present two different approaches commonly found in the literature to solve the dynamic assignment of the ball-chasing task, namely multi-robot task allocation and division of labour. In particular, we explore both approaches in a 3D simulated robotic soccer domain called Robotstadium. Moreover, we evaluate and compare four controllers representing different coordination implementations belonging to either of the two approaches. We show that by formulating the problem as division of labour, together with a response threshold model with multiple stimuli as the arbitration mechanism, we obtain an efficient algorithm with respect to a proven benchmark solution. We also present evidence indicating that this communication-less algorithm results in an emergent team behavior which indirectly also addresses the positioning problem of robots within the soccer field, keeping physical interference low.
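In the classic response threshold model from the division-of-labour literature, each robot engages a task with a probability that grows with the task stimulus and falls with the robot's own threshold. A minimal sketch of the standard single-stimulus rule is shown below; the multi-stimulus combination rule is not given in the abstract, so the product over stimuli here is purely a hypothetical choice:

```python
def response_probability(stimulus, threshold):
    """Classic response threshold rule: P = s^2 / (s^2 + theta^2)."""
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)

def multi_stimulus_probability(stimuli, thresholds):
    """Hypothetical combination over several stimuli (product rule);
    the paper's actual aggregation may differ."""
    p = 1.0
    for s, t in zip(stimuli, thresholds):
        p *= response_probability(s, t)
    return p

# A stimulus equal to the threshold gives a 50% chance of engaging;
# a stronger stimulus raises that chance:
print(response_probability(1.0, 1.0))  # 0.5
print(response_probability(2.0, 1.0) > 0.5)  # True
```

In a ball-chasing setting the stimuli could be quantities such as distance to the ball and alignment to it, with per-robot thresholds acting as the communication-free arbitration mechanism.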
Mario Anzures-García, Luz A. Sánchez-Gálvez, Miguel J. Hornos and Patricia Paderewski-Rodríguez. Methontology-based Ontology Representing a Service-based Architectural Model for Collaborative Applications
Abstract: Nowadays, the usage of ontologies to model systems has been extended to several domains, since an ontology facilitates the modeling of complex systems and provides axioms which can be used as rules, policies, or constraints to govern the system behavior. This paper presents an ontology to represent a Service-based Architectural Model for Collaborative Applications (SAMCA), such as a Conference Management System (CMS), a simple social network, a chat, or a shared workspace. The development process of this ontology is based on Methontology, a well-structured methodology for building ontologies from scratch that includes a set of activities, techniques to carry out each one, and deliverables to be produced after the execution of these activities using the attached techniques. In order to show the development of the ontology using Methontology, the concepts, relations, axioms, and instances of this ontology are specified for the collaborative application chosen as a case study, a CMS.
Martha Cardenas, Patricia Melin and Laura Cruz. Modular Neural Networks for Pattern Recognition Optimizing using a PGA in CMPs
Abstract: This paper presents the implementation of a Parallel Genetic Algorithm (PGA) for the training stage and the optimization of monolithic and modular neural networks for pattern recognition. The optimization consists in obtaining the best architecture in terms of layers and neurons per layer, achieving the lowest training error in a shorter time. The implementation was carried out on a computer with a multicore architecture (CMP) using parallel programming techniques to exploit the resources of the CMP. We present the results obtained in terms of performance by comparing the training stage of the sequential and parallel implementations.
Christian J. Abrajan, Fabian E. Carrasco, Adolfo Águilar, Georgina Flores, Selene Hernández and Paolo Bucciol. A similitude algorithm through the Web 2.0 to compute the best paths for mobility in urban environments
Abstract: In this paper we present a similitude algorithm based on fuzzy relations to support the movement of users of the Urban Public Transportation System (UPTS) in the city of Puebla. The algorithm computes the best paths in order to support and optimize user mobility within urban environments based on three QoS metrics: spatial distance, security, and number of transfers. The algorithm's feedback uses the knowledge gained through the Web 2.0 to allow users to query and exchange experiences. This virtual system incorporates a decision algorithm for the best paths, search algorithms, and fuzzy relation algorithms for the UPTS, in order to benefit local and foreign travelers in the city of Puebla or in cities with similar characteristics.
Dynamic Quadratic Assignment to model the Task Assignment Problem to Processors in a 2D mesh
Dusan Teodorovic, Milica Selmic and Ljiljana Mijatovic-Teodorovic. Neural network based model for radioiodine (I-131) dose decision in patients with well differentiated thyroid cancer
Abstract: A Decision Support System based on an Artificial Neural Network is developed to suggest the I-131 iodine dose in radioactive iodine therapy. The inputs to the system consist of the patient's diagnosis based on histopathologic findings, the patient's age, and the TNM classification. The output of the neural network is the proposed I-131 iodine dose that should be given to the patient. The training group was composed of 72 patients with well differentiated thyroid cancer; the test group consisted of 20 patients. The artificial neural network was trained using the Levenberg-Marquardt back-propagation algorithm. By comparing the results obtained through the model with those resulting from the physician's decision, it has been found that the developed model is highly compatible with reality, and the accuracy of the developed neural network has been exceptional. The developed Decision Support System could be used for educational purposes.
Huimin Chai and Baoshu Wang. An Information Fusion Architecture for Situation Assessment of Ground Battlefield
Abstract: This paper designs an information fusion architecture for situation assessment, divided into three stages: perception, comprehension, and projection. The process of force structure classification is given, including target aggregation region partition, command post recognition, and force structure classification. A template matching algorithm is proposed for the recognition of the command post and the force structure, so that the ground situation assessment is made in terms of concepts that can be computed. Finally, a simulation system for situation assessment is developed; a seaboard defense scenario is simulated and its situation assessment is analyzed to illustrate the functionality of the proposed model.
Ali Vahidian Kamyad, Tahereh Fallah, Elham Shamsara and Omid Shamsara. A new approach for controlling the insulin dose by Fuzzy Logic Controller
Abstract: In this study, the stabilization of the blood glucose level in type 2 diabetics is investigated using fuzzy logic and optimal control, because medical examinations have shown that the blood glucose of diabetic patients is fuzzy and considerably dependent on age, BMI, blood pressure, stress, diet, etc. The adopted control method provides better controllability of the blood glucose level in patients in spite of uncertainty in the data and patient information, and of disturbances including meals, age, body weight, and physical activity. We used open-loop control to stabilize the blood glucose level of patients with type 2 diabetes at its normal value. The stability achieved with the Fuzzy Logic Controller (FLC) is shown in Matlab simulations; compared with the insulin actually administered to patients, the FLC obtained acceptable results. It was also shown that the proposed schemes perform well in simulation experiments, and the results are feasible for medical systems.
Ignacio Acosta-Pineda and Martha Ortiz-Posadas. A New Method for Comparing Somatotypes Using the Logical-Combinatorial Approach
Abstract: This paper proposes a new method for comparing somatotypes using the logical-combinatorial approach of pattern recognition theory, through the mathematical modeling of a function that evaluates the similarity between somatotypes, considering the 10 anthropometric dimensions defined in the Heath-Carter method. This similarity function was applied to a sample of different individual somatotypes, and the results were compared with those obtained by the two most commonly used methods: the somatotype dispersion distance and the somatotype attitudinal distance. The method presented in this work obtained correct results and offers a new perspective for the comparison of somatotypes.
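The two baseline measures mentioned above have standard closed forms in the Heath-Carter literature. A sketch is given below, assuming the usual somatoplot conventions (x = ectomorphy - endomorphy, y = 2·mesomorphy - (endomorphy + ectomorphy)); the paper's own similarity function is not reproduced here:

```python
import math

def somatoplot(endo, meso, ecto):
    """2-D somatoplot coordinates used by the dispersion distance."""
    return ecto - endo, 2 * meso - (endo + ecto)

def dispersion_distance(s1, s2):
    """Somatotype dispersion distance (SDD), computed in the somatoplot:
    sqrt(3*(x1-x2)^2 + (y1-y2)^2)."""
    x1, y1 = somatoplot(*s1)
    x2, y2 = somatoplot(*s2)
    return math.sqrt(3 * (x1 - x2) ** 2 + (y1 - y2) ** 2)

def attitudinal_distance(s1, s2):
    """Somatotype attitudinal distance (SAD): Euclidean distance in the
    3-D (endomorphy, mesomorphy, ectomorphy) space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)))

a = (3.0, 5.0, 2.0)  # endomorphy, mesomorphy, ectomorphy
print(dispersion_distance(a, a))   # 0.0
print(attitudinal_distance(a, a))  # 0.0
```

Note that both baselines use only the three somatotype components, whereas the proposed similarity function operates on the 10 underlying anthropometric dimensions.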
Karina Ruby Perez Daniel, Enrique Escamilla Hernandez, Mariko Nakano Miyatake and Hector Manuel Perez Meana. Unsupervised Learning Objects using Image Retrieval System
Abstract: For several years artificial intelligence systems have posed a big challenge, and learning objects is one of the most important parts of this task. Unsupervised object learning provides the necessary intelligence to develop other ambitious tasks such as elderly care systems, similar-image retrieval systems, etc. Therefore this method must be as unsupervised, fast, and accurate as possible. This paper presents an unsupervised object learning method based on images retrieved from the Internet, in order to obtain a visual relationship between a word and an image of any object at any time, taking a keyword as input. For this purpose all images are described using the Pyramid of Histograms of Oriented Gradients (PHOG) algorithm, and the resulting PHOG vectors are clustered to build a dataset for the learned object category. The clustering methods used were k-means and the Chinese Restaurant Process (CRP), to build an efficient and simple learning method.
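The k-means clustering of descriptor vectors can be illustrated with a bare-bones Lloyd's algorithm. PHOG descriptors are simply dense vectors, so toy 2-D points stand in for them in this sketch; this is a generic illustration, not the paper's implementation:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Bare-bones Lloyd's algorithm over equal-length numeric tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # move each center to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centers, clusters

# Two well-separated blobs are recovered exactly:
centers, _ = kmeans([(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)], k=2)
print(sorted(centers))  # [(0.0, 0.5), (10.0, 10.5)]
```

Unlike k-means, the CRP variant named in the abstract does not fix k in advance; that is precisely why the authors would pair the two.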
Ana Lilia Laureano-Cruces, Laura Elena Hernandez-Dominguez and Martha Mora-Torres. Visual Simplified Characters’ Emotion Emulator Implementing OCC Model
Abstract: In this paper, we present a visual emulator of the emotions seen in characters in stories. This system is based on a simplified view of the cognitive structure of emotions proposed by Ortony, Clore, and Collins. The goal of this paper is to provide a visual platform that allows us to observe changes in the characters' different emotions, and the intricate interrelationships between: 1) each character's emotions, 2) their affective relationships and actions, 3) the events that take place in the development of a plot, and 4) the objects of desire that make up the emotional map of any story. This tool was tested on stories with a contrasting variety of emotional and affective environments −Othello, Twilight, and Harry Potter− behaving sensibly and in keeping with the atmosphere in which the characters were immersed.
Leonardo Filipe Batista Silva De Carvalho, Helio Cavalcante, Roberta Lopes and Fábio Paraguaçu. The Application of the Genetic Algorithm based on Abstract Data Type (GAADT) Model for the Adaptation of Scenarios of MMORPGs
Abstract: The importance of using Artificial Intelligence in video games has grown in response to the need to show behaviors and other game elements that more accurately reflect what is seen in the real world. To assist with this need, this paper takes advantage of the context of MMORPGs (game worlds with rich and interactive environments where important events occur simultaneously) to demonstrate the application of the artificial intelligence technique of the Genetic Algorithm based on Abstract Data Type (GAADT) to changing the features of MMORPG game maps with the passage of time, in an attempt to reproduce what is seen in the real world.
Carlos Milián, Rafael Bello, Carlos Morell and Bernard De Baets. A study about how the training data monotonicity affects the performance of ordinal classifiers
Abstract: Some classification problems are based on decision systems with ordinal-valued attributes, and sometimes such ordinal classification problems arise with monotone datasets. One important characteristic of monotone decision systems is that objects with better condition attribute values cannot be classified into a worse class. Nevertheless, noise is often present in real-life data, and it can generate partially non-monotone datasets. Several classifiers have been developed to deal with this problem, but their performance suffers when faced with real data that are only partially monotone. This paper studies two monotonicity measures for datasets and analyzes their correlation with the performance of several ordinal classifiers. Our results allow the a priori estimation of an ordinal classifier's behavior when faced with a partially monotone ordinal decision system.
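One simple dataset-level monotonicity measure of the kind studied here is the fraction of comparable object pairs that violate the monotonicity constraint (better attribute values must not receive a worse label). The abstract does not name its two measures, so the sketch below is only a plausible example, assuming all attributes and the label increase in the same direction:

```python
def non_monotonicity(X, y):
    """Fraction of comparable pairs (X[i] <= X[j] componentwise, i != j)
    whose labels violate monotonicity (y[i] > y[j])."""
    comparable = violations = 0
    n = len(X)
    for i in range(n):
        for j in range(n):
            if i != j and all(a <= b for a, b in zip(X[i], X[j])):
                comparable += 1
                if y[i] > y[j]:
                    violations += 1
    return violations / comparable if comparable else 0.0

X = [(1, 1), (2, 2), (3, 3)]
print(non_monotonicity(X, [0, 1, 2]))  # 0.0  (fully monotone)
print(non_monotonicity(X, [2, 1, 0]))  # 1.0  (fully reversed)
```

Computing such a measure on a training set before fitting is what makes the a priori estimation of classifier behavior possible.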
José Luis Oropeza. Using signal processing based on wavelet analysis to improve automatic speech recognition on a corpus of digits
Abstract: This paper shows results obtained when we applied wavelets to a corpus of digits pronounced by five speakers of the Spanish language. One of the most important aspects of ASR is reducing the amount of data used. Firstly, we show the results obtained when we applied wavelet filters to the speech signal in order to keep only the low frequencies. Secondly, we used the ability of wavelet analysis to perform data compression in order to reduce by half the amount of voice data analyzed. Each of the two previous experiments produced a new corpus, and each corpus was then used to train an Automatic Speech Recognition system based on vector quantization (VQ). Finally, we compared our results with those obtained on the original corpus and found a 3-5% reduction in Word Error Rate (WER). Daubechies wavelets were used in the experiments, with Vector Quantization (VQ) over Linear Prediction Coefficients (LPC) as the features representing the speech signal.
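The idea of keeping only the low frequencies while halving the data can be illustrated with the simplest member of the Daubechies family, the Haar wavelet. The sketch below performs a one-level decomposition without any signal processing library; the paper's actual filter order and decomposition depth are not stated in the abstract:

```python
import math

def haar_dwt(signal):
    """One-level Haar transform: orthonormal averages (low band) and
    differences (high band), each half the input length.
    Assumes an even-length input."""
    s = math.sqrt(2.0)
    low = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return low, high

# Keeping only the low band halves the data while preserving the
# slowly varying envelope; the high band of a locally flat signal is zero:
low, high = haar_dwt([1.0, 1.0, 4.0, 4.0])
print(high)  # [0.0, 0.0]
```

Discarding `high` and recognizing over `low` is the data-halving step described above, before LPC features and VQ are applied.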
A. Yaroslavi Martínez Campos and Hugo Romero. Estimation of position and orientation for land mobile robots using artificial vision
Abstract: In this paper we propose a computer vision algorithm to estimate the position and orientation of a land mobile robot system. For this task the workspace is equipped with a vision camera, and the mobile robots are labeled on top with different marks for their identification. The vision algorithm is developed in the C programming language using the Open Source Computer Vision Library (OpenCV), a collection of high-performance algorithms for image processing and computer vision. The experimental results validate the theoretical aspects.
A. Yaroslavi Martínez Campos and Hugo Romero. Tracking reference of land mobile robots using visual servoing
Abstract: In this paper we present a nonlinear visual servoing control for the reference tracking of a multi-robot system based on its kinematic model. The control strategy applied is decentralized and is designed using Lyapunov's approach. We show experimental results of tracking a path with a system of three mobile robots.
Luis Alberto Morgado-Ramirez, Sergio Hernandez-Mendez, Luis F. Marin-Urias, Antonio Marin Hernandez and Homero V. Rios-Figueroa. Visual Data Combination for Object Detection and Localization for Autonomous Robot Manipulation Tasks
Abstract: For mobile robot manipulation, autonomous object detection and localization is still an open issue. This paper presents a method for the detection and localization of simple colored geometric objects, such as cubes, prisms, and cylinders, located on a table. The proposed method uses a passive stereo vision system and consists of two main steps. The first step, colored object detection, combines a color segmentation process with an edge detection step to restrict the colored regions. The second step, pose recovery, merges the colored object detection mask with the disparity map coming from the stereo camera; this is necessary to avoid the noise inherent in the correlation process. The filtered 3D data are used to determine the main plane on which the objects are placed, and then the footprint is used to localize them in the stereo camera reference frame and then in the world reference frame.
Carlos Alberto Donís Díaz, Rafael Bello Pérez and Eduardo Valencia Morales. Using Linguistic Data Summarization in the study of creep data for the design of new steels
Abstract: A procedure for the design of new creep-resistant ferritic steels that involves a large systematic search over combinations of parameters using a neural network model was proposed in a paper published a few years ago. In the present work we study the effectiveness of the Linguistic Data Summarization (LDS) technique as a tool to discover credible and useful creep behavior that can serve as a guide in the mentioned search. We perform experiments similar to those discussed in that paper in order to make an effective comparison of the creep behavior. We propose the use of an indicator that measures the degree of representativeness of the linguistic terms for the summarizer in the context of our experiments. As a result, the effectiveness of LDS at discovering creep behavior hidden in creep data, and the usefulness of the representativeness indicator, were confirmed.
Luis A. Pineda, Ivan V. Meza, Héctor Avilés, Carlos Gershenson, Caleb Rascón, Montserrat Alvarado and Lisset Salinas. IOCA: An Interaction-Oriented Cognitive Architecture
Abstract: In this paper an interaction-oriented cognitive architecture for the specification and construction of situated systems and service robots is presented. The architecture is centered on an interaction model, which is called “dialogue model”, with its corresponding program interpreter or “dialogue manager”. A dialogue model represents the task structure of a specific application, and coordinates interpretations produced by the system’s perceptual devices with the system’s intentional actions. The architecture also supports reactive behavior, which relates context independent input information with the system’s rendering devices directly. The present architecture has been used for the specification and implementation of fixed multimodal applications, and also of service robots with spoken language, vision and motor behavior, in a simple, integrated and modular fashion, where the cognitive architecture’s modules and processes are generic, but each task is represented with a specific dialogue model and its associated knowledge structures.
Yulia Ledeneva, René Arnulfo García-Hernández, Griselda Areli Matias Mendoza and Citlalih Gutierrez Estrada. Comparison of State-of-the-Art Methods and Commercial Tools for Multi-Document Text Summarization
Abstract: The final goal of Automatic Text Summarization (ATS) is to obtain tools that produce the most human-similar summary. Almost all the papers on ATS research area present a review of the state-of-the-art of one side of the issue, since only is reviewed the-state-of-the-art of the tools reported in papers. However, we found a great number of developed commercial tools which are not reported in papers (which is understandable by competitive reasons), but also have not been evaluated. The question is what commercial tools are good in comparison to paper-published tools. This paper gives a survey for 18 commercial tools and state-of-the-art methods for multi-document summarization task testing on a standard collection of documents which contains 59 collections of documents.