Open Access Original Research Article

Super Resolution Image Reconstruction by Granular Computing with L1-norm

Hongbing Liu, Chang-An Wu

Journal of Advances in Mathematics and Computer Science, Page 1-11
DOI: 10.9734/BJMCS/2015/19721

To address the high computational complexity of the training process in sparse representation, the centers of granules obtained by granular computing (GrC) with the L1-norm are taken as the bases of sparse representation and used to reconstruct a super-resolution image from the input image. Firstly, a granule is represented as a hyperdiamond induced by the L1-norm in N-dimensional space. Secondly, a join operation between two hyperdiamond granules is designed to pass from the microcosmic world to the macroscopic world. Thirdly, a granularity threshold r is used to control the join process. The centers of the resulting granules serve as approximate bases to reconstruct the super-resolution (SR) image from the low-resolution (LR) image. Experimental results show that SR image reconstruction by GrC with the L1-norm reduces the root mean square error (RMSE) between the SR image and the original image compared with bicubic interpolation and sparse representation.
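The abstract's core objects can be sketched in a few lines. Below is an illustrative Python model of an L1-norm (hyperdiamond) granule and a join operation controlled by a granularity threshold r; the class names and the specific join rule are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative sketch of L1-norm (hyperdiamond) granules and a join
# operation, based only on the abstract's description.

def l1_distance(x, y):
    """L1 (Manhattan) distance between two points."""
    return sum(abs(a - b) for a, b in zip(x, y))

class Granule:
    """A hyperdiamond granule: all points x with ||x - center||_1 <= radius."""
    def __init__(self, center, radius=0.0):
        self.center = tuple(center)
        self.radius = radius

    def contains(self, x):
        return l1_distance(x, self.center) <= self.radius

def join(g1, g2):
    """A simple covering rule: center at the midpoint of the two centers,
    radius large enough for the result to contain both operands."""
    center = tuple((a + b) / 2 for a, b in zip(g1.center, g2.center))
    half = l1_distance(g1.center, g2.center) / 2
    return Granule(center, half + max(g1.radius, g2.radius))

# Join two point granules; accept the merge only if the result stays
# under the granularity threshold r, as in the abstract's control step.
r = 2.0
g = join(Granule((0.0, 0.0)), Granule((1.0, 1.0)))
print(g.center, g.radius)  # (0.5, 0.5) 1.0
assert g.radius <= r       # join accepted
```

The threshold r thus bounds how coarse a merged granule may become; the surviving granule centers would then play the role of the approximate bases.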

Open Access Original Research Article

On the DAG Decomposition

Yangjun Chen, Yibin Chen

Journal of Advances in Mathematics and Computer Science, Page 1-27
DOI: 10.9734/BJMCS/2015/19380

In this paper, we propose an efficient algorithm to decompose a directed acyclic graph G into a minimized set of node-disjoint chains that cover all the nodes of G. For any two nodes u and v on a chain, if u is above v then there is a path from u to v in G. The best algorithm for this problem up to now needs O(n³) time, where n is the number of nodes of G. Our algorithm, however, needs only O(k·n²) time, where k is G's width, defined to be the size of a largest node subset U of G such that for every pair of nodes x, y ∈ U, there exists no path from x to y or from y to x. More importantly, the existing algorithm requires O(n²) extra space (besides the space for G itself) to maintain the transitive closure of G, while ours needs only O(k·n) extra space.
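The defining property of such a decomposition can be checked directly. The sketch below (not the paper's algorithm, which is not given in the abstract) verifies that a candidate set of chains is node-disjoint, covers all nodes, and satisfies the reachability condition stated above:

```python
# Check a candidate node-disjoint chain decomposition of a DAG against
# the abstract's definition: chains cover all nodes, and if u is above v
# on a chain then there is a path from u to v in G.

def reachable(adj, u, v):
    """Depth-first search: is there a path from u to v?"""
    stack, seen = [u], set()
    while stack:
        x = stack.pop()
        if x == v:
            return True
        if x in seen:
            continue
        seen.add(x)
        stack.extend(adj.get(x, ()))
    return False

def is_chain_decomposition(adj, nodes, chains):
    """All nodes covered exactly once, each chain ordered by reachability."""
    covered = [n for c in chains for n in c]
    if sorted(covered) != sorted(nodes):
        return False
    return all(reachable(adj, c[i], c[i + 1])
               for c in chains for i in range(len(c) - 1))

# A small DAG: a -> b -> d and a -> c -> d. Its width is k = 2
# ({b, c} is a largest antichain), so two chains suffice.
adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
nodes = ["a", "b", "c", "d"]
print(is_chain_decomposition(adj, nodes, [["a", "b", "d"], ["c"]]))  # True
```

By Dilworth's theorem the minimum number of chains equals the width k, which is why k appears in the stated time and space bounds.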

Open Access Original Research Article

Comparative Analysis of Electroencephalogram-Based Classification of User Responses to Statically vs. Dynamically Presented Visual Stimuli

Lin Hou Chew, Jason Teo, James Mountstephens

Journal of Advances in Mathematics and Computer Science, Page 1-13
DOI: 10.9734/BJMCS/2015/19540

Emotion is an important part of human life, and it plays an important role in human communication. Nowadays, as the use of machines becomes more common, human-computer interaction (HCI) has become important: a better understanding of the user can lead to machines that assist better. The use of EEG for understanding humans is widely studied for its benefits in several fields, such as neuromarketing and HCI. In this study, we compare two different stimuli (3D shapes with motion vs. static 2D emotional images) in attempting to classify positive versus negative feelings. A medical-grade 9-electrode Advance Brain Monitoring (ABM) B-alert X10 is used as the brain-computer interface (BCI) acquisition device to obtain the EEG signals. Four subjects recorded brain signals while viewing the two types of stimuli. Feature extraction is then applied to the acquired EEG signals to obtain the alpha, beta, gamma, theta and delta rhythms as features, using time-frequency analysis. Support vector machine (SVM) and K-nearest neighbors (KNN) classifiers are used to train on and classify positive and negative feelings for both stimuli using different channels and rhythms. The average accuracy for the 3D motion shapes is better than that for the 2D static emotional images with both classifiers: 69.88% vs. 56.35% using SVM, and 65.31% vs. 55.45% using KNN, for the 3D motion shapes and the emotional images respectively. This study shows that the parietal lobe is more informative in the classification of 3D motion shapes, while the Fz channel of the frontal lobe is more informative in the classification of 2D static emotional images.
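The classification step described above can be illustrated with a minimal KNN on band-power-style feature vectors. The features and labels below are invented for illustration, not the study's EEG data, and a real pipeline would use a full feature matrix per channel and rhythm.

```python
# Minimal K-nearest-neighbors sketch on synthetic "band-power" features,
# illustrating the classification step described in the abstract.
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by majority vote among the k nearest training points
    (Euclidean distance); train is a list of (feature_vector, label)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy features: (alpha power, beta power) per trial.
train = [((1.0, 0.2), "positive"), ((0.9, 0.3), "positive"),
         ((0.2, 1.0), "negative"), ((0.3, 0.9), "negative")]
print(knn_predict(train, (0.95, 0.25), k=3))  # positive
```

The study's SVM classifier works analogously but learns a separating hyperplane from the same kind of feature vectors instead of voting over neighbors.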

Open Access Original Research Article

On Multiple Integral Chebyshev Wavelets Collocation Method (MICWCM) for Solving Linear and Non-linear Second-order Differential Equations

O. A. Adewumi, M. O. Oke, R. A. Raji

Journal of Advances in Mathematics and Computer Science, Page 1-8
DOI: 10.9734/BJMCS/2015/19557

This paper presents a new and reliable algorithm for solving linear and non-linear second-order differential equations. The new algorithm is called the Multiple Integral Chebyshev Wavelets Collocation Method (MICWCM). The algorithm improves some of the earlier results obtained by other researchers, except in the case of mixed boundary conditions. The method proved to be very accurate, reliable and efficient in handling linear and non-linear initial and boundary value problems. Numerical results obtained by the new method are in agreement with the exact solutions available in the literature.

Open Access Original Research Article

Anti-fuzzy BRK-Ideal of BRK-Algebra

Osama Rashad El-Gendy

Journal of Advances in Mathematics and Computer Science, Page 1-9
DOI: 10.9734/BJMCS/2015/19309

In this paper the notion of an anti-fuzzy BRK-ideal of a BRK-algebra is introduced. Several theorems are stated and proved. The epimorphic image and the inverse image under an into homomorphism of an anti-fuzzy BRK-ideal are studied in detail. The Cartesian product of anti-fuzzy BRK-ideals is introduced and studied.

Open Access Original Research Article

Free Assets and Their Relations with Riskless Assets

Reza Keykhaei, Mohammad Taghi Jahandideh

Journal of Advances in Mathematics and Computer Science, Page 1-15
DOI: 10.9734/BJMCS/2015/19469

Tobin's one-fund theorem states that, when a portfolio consists of some risky assets and a riskless asset (with return r_c), every efficient portfolio in Mean-Variance optimization is a combination of the tangency portfolio and the riskless asset. We introduce the notion of a free asset, which is an uncorrelated risky asset, and convert the problem of determining the tangency portfolio into a problem of lower complexity, requiring a smaller portfolio, by excluding free assets with mean return r_c from the initial portfolio. We show that a set of free assets with the same mean return can be replaced by one particular free asset with that mean return to obtain the same results. We also show that free assets (or a set of free assets) with mean return r_c and the riskless asset are closely connected and, under special conditions, play almost the same role in Mean-Variance portfolio selection problems.
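A standard computation makes the exclusion of such free assets plausible: tangency weights are proportional to inv(Sigma)(mu - r_c), so for uncorrelated assets the weight of asset i is proportional to (mu_i - r_c)/var_i, which vanishes when mu_i = r_c. The numbers below are invented for illustration; this is the classical formula, not the paper's construction.

```python
# Tangency-portfolio weights for uncorrelated (diagonal-covariance)
# assets: proportional to (mu_i - r_c) / var_i. A "free" (uncorrelated)
# asset whose mean return equals r_c gets weight zero, so it can be
# excluded from the problem, as the abstract suggests.

def tangency_weights(mu, var, r_c):
    """Normalized tangency weights for uncorrelated risky assets."""
    raw = [(m - r_c) / v for m, v in zip(mu, var)]
    s = sum(raw)
    return [w / s for w in raw]

r_c = 0.02
mu  = [0.08, 0.05, 0.02]   # the third asset is free with mean return r_c
var = [0.04, 0.02, 0.01]   # uncorrelated: the covariance matrix is diagonal

w = tangency_weights(mu, var, r_c)
print(w)  # third weight is 0.0: the free asset with mean r_c drops out
```

With correlations present, the full inv(Sigma)(mu - r_c) computation is needed, which is where reducing the portfolio size pays off.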