People tend to convey emotions and express sentiment, either consciously or unconsciously, while they speak or write. Many are under the common misconception that sentiment and emotion are indistinguishable. Sentiment is a mental attitude originating from feelings, whereas emotion is a strong feeling itself. Sentiment analysis is used to identify and extract subjective information from data; thus it is also called Opinion Mining. Emotion analysis gives insight into people's psychological responses. With the growth of Web 2.0, many social media and marketing companies started investing more resources in this field. This helped them predict several things, from computing customer-satisfaction metrics to identifying a company's detractors and promoters. At present there are several methods and techniques for sentiment and emotion analysis. In this paper we review several proposed studies and methods. Finally, we conclude that combining a regression-based method of sentiment detection with image processing to detect emotion can yield a productive hybrid model with more precise results.
The idea of statistical convergence of sequences in Hausdorff topological spaces was introduced, and studied to some extent, by Di Maio and Kocinac. In this paper we consider the concept of statistical continuity of functions and give a characterization of it using statistically convergent sequences in first-countable Hausdorff topological spaces.
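For readers unfamiliar with the notion, the standard density-based definitions can be sketched as follows (notation is ours, not the paper's):

```latex
% Natural density of a set A of positive integers
\[
  \delta(A) \;=\; \lim_{n\to\infty} \frac{1}{n}\,\bigl|\{\, k \le n : k \in A \,\}\bigr|
\]
% Statistical convergence of (x_n) to x in a topological space X
\[
  x_n \xrightarrow{\;\mathrm{st}\;} x
  \quad\Longleftrightarrow\quad
  \delta\bigl(\{\, n : x_n \notin U \,\}\bigr) = 0
  \ \text{ for every open } U \ni x
\]
% Statistical continuity of f : X \to Y at x
\[
  x_n \xrightarrow{\;\mathrm{st}\;} x
  \;\Longrightarrow\;
  f(x_n) \xrightarrow{\;\mathrm{st}\;} f(x)
\]
```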
The paper provides a solid foundation in the fundamentals of one- and two-dimensional Fourier series and transforms. It examines the representation of periodic non-sinusoidal signals, which may include current, voltage, voice, images, etc., as sums of infinite trigonometric series in sine and cosine terms. It presents an analysis of Fourier series with regard to some of their applications and to modeling in electric circuits, illustrated with corresponding numerical problems. The processing of images in the frequency domain rather than the spatial domain, using different filters, is also covered, with examples of algorithms developed in MATLAB and visual comparisons for each method.
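As a small illustration of the kind of trigonometric-series representation discussed above, here is a partial Fourier sum of an odd square wave (a sketch; the function name and the choice of signal are ours, not taken from the paper, which works in MATLAB):

```python
import math

def square_wave_partial_sum(t, n_terms):
    """Partial Fourier sum of the unit-amplitude odd square wave:
    (4/pi) * sum_{k=0}^{n_terms-1} sin((2k+1) t) / (2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms)
    )
```

With enough terms the partial sum approaches the square wave's value away from its discontinuities, e.g. close to 1 at t = pi/2.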
The main goal of the article is to explore two unusual numeral systems, which radically alter our ideas about positional numeral systems: numeral systems with irrational bases. The first of them is the binary (0,1) numeral system with an irrational base (the golden ratio), proposed in 1957 by the 12-year-old American mathematician George Bergman; the second is the ternary mirror-symmetrical numeral system with an irrational base, proposed by the author of the present article and published in 2002 in The Computer Journal (British Computer Society). Bergman's system is a recent mathematical discovery in number theory and the greatest modern mathematical discovery in the field of positional numeral systems after the Babylonian numeral system with base 60 and the decimal and binary systems. Bergman's system can be considered a new definition of the real numbers and is a source of new, unusual properties of the natural numbers. Bergman's system generates the ternary mirror-symmetrical numeral system, which has the unique mathematical property of mirror symmetry; this property can be used for effective detection of errors in all arithmetical operations. These numeral systems alter our ideas about positional numeral systems and can affect the future development of mathematics and computer science. The ternary mirror-symmetrical numeral system is possibly the final stage in the long historical development of the concept of ternary numeral systems, because in it two scientific problems, the sign problem (the representation of negative numbers) and the problem of error detection based on the principle of mirror symmetry, are solved simultaneously. The famous American mathematician and computer scientist Donald Knuth evaluated the ternary mirror-symmetrical numeral system highly.
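A minimal sketch of Bergman's idea: every positive integer has a finite representation as a sum of distinct powers of the golden ratio, and a greedy algorithm finds such digits (the function name, tolerance, and exponent range are our illustrative choices, not the article's):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, base of Bergman's system

def to_base_phi(n, tol=1e-9, kmax=40):
    """Greedy base-phi expansion: returns {k: 1} such that
    n is (approximately) the sum of PHI**k over the chosen exponents k."""
    digits = {}
    r = float(n)
    for k in range(kmax, -kmax - 1, -1):  # largest powers first
        p = PHI ** k
        if p <= r + tol:
            digits[k] = 1
            r -= p
    return digits
```

For example, 5 decomposes into the powers phi^3 + phi^-1 + phi^-4, matching the base-phi numeral 1000.1001.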
The author is ready to offer consulting services to any advanced-technology electronics company interested in the technical implementation of ternary mirror-symmetrical processors and of computers based on them.
The problem of closed frequent itemset discovery is a fundamental issue in data mining, with applications in numerous domains. Until now, the general technique for incremental mining has been to use an intermediate structure and update that structure whenever the data change. For incremental mining of closed itemsets, the intermediate structure used is a concept lattice. The concept lattice improves the efficiency of the search process, but it is costly to adjust the lattice on each addition or removal, and it is difficult to develop a parallelization strategy for it. This article proposes incremental algorithms that find all closed itemsets with a new intermediate structure: a linear list. To the best of our knowledge, this is the first algorithm proposed so far for incremental mining of closed itemsets that uses a linear list as an intermediate structure. Initial experimental comparisons between the concept-lattice and linear-list intermediate structures show that the greater the number of transactions and the number of closed itemsets obtained in the mining process, the more efficient the linear list becomes.
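To make the target of the algorithms concrete: an itemset is closed when the intersection of all transactions containing it equals the itemset itself. A brute-force (non-incremental) sketch of this definition, with illustrative names of our own choosing, is:

```python
from itertools import combinations

def closed_itemsets(db, min_support=1):
    """Enumerate closed frequent itemsets of a transaction database.
    X is closed iff closure(X) == X, where closure(X) is the
    intersection of all transactions that contain X."""
    items = sorted(set().union(*db))
    closed = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            X = frozenset(cand)
            supp_txns = [t for t in db if X <= t]
            if len(supp_txns) < min_support:
                continue  # not frequent
            closure = frozenset.intersection(*map(frozenset, supp_txns))
            if closure == X:
                closed[X] = len(supp_txns)
    return closed
```

For the database {ab, abc, ac} this yields the closed itemsets {a}:3, {a,b}:2, {a,c}:2, {a,b,c}:1; the incremental algorithms in the paper maintain exactly this family under updates, via a linear list rather than a concept lattice.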
Cloud computing applications (CCA) are defined by their elasticity, on-demand provisioning, and ability to address volatile workloads cost-effectively. These cloud computing (CC) applications are being increasingly deployed by organizations, but without a means of managing their performance proactively. While CCA offer advantages and disadvantages over traditional client-server applications, their unreliable application performance, due to the intricacy and the large number of interconnected moving parts of the underlying infrastructure, has become a major challenge for software engineers and system administrators. For example, capturing how end-users perceive application performance as they complete their daily tasks has not been addressed satisfactorily. One possible approach for identifying the most relevant performance measures for Root Cause Analysis (RCA) of performance degradation events in CCA, from an end-user perspective, is to leverage the information captured in performance logs, a source of data that is widely available in today's datacenters, where detailed records of resource consumption and performance are captured from the numerous systems, servers, and network components used by the CCA. This paper builds on a model proposed for measuring CC application performance and extends it with the end-user perspective, exploring how it can be used to identify root causes (RC) of performance degradation events in a large-scale industrial scenario. The experimentation required adjustments to the original proposal in order to determine, with the help of a multivariate statistical technique, the performance of a CCA from the perspective of an end-user. An experiment with a corporate email CCA is also presented; it illustrates how the performance model can identify the most relevant performance measures and help predict future performance issues.
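The abstract does not name its multivariate statistical technique; as one common choice for ranking "most relevant" measures, principal component analysis can score each measure by its loading on the first component of the standardized log matrix. A sketch under that assumption (function name and data are illustrative, not from the paper):

```python
import numpy as np

def rank_measures_by_loading(X, names):
    """Rank performance measures (columns of X) by the absolute value
    of their loading on the first principal component of standardized X."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each measure
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    loadings = np.abs(Vt[0])                      # first principal direction
    order = np.argsort(loadings)[::-1]
    return [(names[i], float(loadings[i])) for i in order]
```

Measures that move together with the dominant mode of variation (e.g. CPU and response latency during a degradation event) rank high; uncorrelated noise ranks low.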
A recent line of work on improving the secrecy capacity of the Gaussian wiretap channel has introduced a new lattice invariant called the secrecy gain. Belfiore and Solé made a conjecture about the point at which the secrecy gain is maximal. Verified for most unimodular lattices, this conjecture does not hold in general for l-modular lattices (l ≥ 2). Ernvall-Hytönen modified the secrecy function and proved that it satisfies the conjecture for odd 2-modular lattices. In this paper, the authors introduce a new secrecy function for 2-modular lattices. They show that, by using the lattice D4 instead of Dl = Z ⊕ √l Z, the conjecture holds for both even and odd 2-modular lattices in dimension n ≥ 4. Using that result, they further prove that the modified secrecy function of A.-M. Ernvall-Hytönen holds for both even and odd 2-modular lattices.
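For context, the usual definitions from the wiretap-lattice literature can be sketched as follows (our notation; the paper's new secrecy function differs in the normalizing lattice, replacing Z^n by a D4-based baseline):

```latex
% Theta series of a lattice \Lambda
\[
  \Theta_\Lambda(z) \;=\; \sum_{x \in \Lambda} q^{\|x\|^2},
  \qquad q = e^{i\pi z}, \quad \operatorname{Im}(z) > 0
\]
% Secrecy function of an n-dimensional unimodular lattice \Lambda, y > 0
\[
  \Xi_\Lambda(y) \;=\; \frac{\Theta_{\mathbb{Z}^n}(iy)}{\Theta_\Lambda(iy)}
\]
% Secrecy gain; Belfiore and Sol\'e conjectured that the supremum
% is attained at the symmetry point y = 1
\[
  \chi_\Lambda \;=\; \sup_{y > 0} \Xi_\Lambda(y)
\]
```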