The stock market generates huge amounts of data that are uncertain, incomplete, or fuzzy in nature. Making predictions from such data is a very complicated task and one of the biggest challenges for the AI community. Various traditional and statistical indicators have been proposed for this purpose. However, combining these tools and techniques requires deep human expertise and considerable domain justification. Stock market behavior is highly susceptible to change. To improve prediction performance, a method is needed that can accurately predict stock prices and can train on multiple records simultaneously. The neural network is a very important tool for stock market prediction. This paper mainly highlights a neural-network-based approach to predicting stock market behavior, which can also help stock brokers and investors invest money in the stock market at the right time.
This paper investigates the problem of L2-disturbance attenuation for robot manipulators with model uncertainty and input time delay. Using the idea of shaping potential energy and the method of pre-feedback, a delayed Hamiltonian system structure is obtained for both fully actuated and underactuated uncertain robot manipulators with time delay. An energy-based adaptive L2-disturbance attenuation controller is then obtained by applying the Lyapunov functional method. Sufficient conditions are spelled out to guarantee the rationality and validity of the proposed control law. Simulation of a two-link robot manipulator is presented to illustrate the effectiveness of the results achieved in this paper.
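The potential-energy-shaping idea can be illustrated in a far simpler setting than the paper's delayed Hamiltonian controller: a single-link arm in which gravity is cancelled and a new quadratic potential is injected around the target angle, together with damping. All parameters below are hypothetical, and this sketch ignores the time delay and uncertainty that the paper actually treats.

```python
import numpy as np

# Hypothetical single-link arm: shape the potential energy around q_des.
m, l, g, b = 1.0, 1.0, 9.81, 0.0   # mass, length, gravity, natural damping
kp, kd = 25.0, 8.0                 # shaping and damping gains (assumed)
q_des = np.pi / 4                  # target joint angle

def control(q, dq):
    # cancel gravity, then impose a quadratic potential centred at q_des
    return m * g * l * np.sin(q) - kp * (q - q_des) - kd * dq

q, dq, dt = 0.0, 0.0, 1e-3
for _ in range(20000):             # 20 s of semi-implicit Euler simulation
    tau = control(q, dq)
    ddq = (tau - b * dq - m * g * l * np.sin(q)) / (m * l ** 2)
    dq += dt * ddq
    q += dt * dq

print(round(float(q), 3))          # settles at the target angle pi/4
```

The closed loop reduces to a linear, well-damped second-order system, which is why the arm converges to the shaped potential's minimum.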
This paper seeks a good forecasting model that can accurately represent the inherent characteristics of Internet traffic and forecast the desired traffic load to satisfy a performance target, using artificial neural network (ANN) technology. We developed a computational tool in Visual Basic 6.0 for this purpose, based on a multi-layer network. Through an empirical study examining the effect of ANN model design issues, such as the impact of lag observations and the number of neurons in a two-hidden-layer network, on Internet traffic prediction, the results show that the ANN is a powerful forecast modeling tool that can accurately capture the inherent traffic characteristics and forecast the desired traffic load, and that these design factors have a significant impact on the ANN forecaster's performance.
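The lag-observation design issue can be sketched as follows. This is a hypothetical numpy re-implementation, not the paper's Visual Basic 6.0 tool, and it uses a single hidden layer rather than two: a synthetic periodic "traffic" series is forecast one step ahead from p lagged observations with a tanh network trained by plain gradient descent. Layer sizes, learning rate, and the synthetic series are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500)
# synthetic daily-cycle-like traffic: sinusoid plus noise (assumed data)
traffic = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(500)

p = 5                                          # number of lag observations
X = np.column_stack([traffic[i:i - p] for i in range(p)])  # lag window
y = traffic[p:]                                # next-step target

h = 8                                          # hidden neurons (assumed)
W1 = 0.5 * rng.standard_normal((p, h)); b1 = np.zeros(h)
W2 = 0.5 * rng.standard_normal(h);      b2 = 0.0

lr = 0.05
for _ in range(5000):                          # full-batch gradient descent
    a = np.tanh(X @ W1 + b1)                   # hidden activations
    err = a @ W2 + b2 - y                      # dE/dpred for squared error
    gW2 = a.T @ err / len(y); gb2 = err.mean()
    da = np.outer(err, W2) * (1 - a ** 2)      # backprop through tanh
    gW1 = X.T @ da / len(y); gb1 = da.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(round(float(mse), 4))
```

Varying `p` and `h` in this sketch mirrors the abstract's study of lag observations and neuron counts.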
This paper presents the derivation and implementation of a block integrator for the solution of stiff and oscillatory first-order initial value problems of Ordinary Differential Equations (ODEs). The integrator was derived by collocation and interpolation of a combination of power series and exponential functions to generate a continuous implicit Linear Multistep Method (LMM). The basic properties of the derived integrator were investigated, and the integrator was implemented on some sampled stiff and oscillatory problems. The results obtained show that the block integrator gives better approximations than some existing ones.
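Why implicit integrators matter for stiff problems can be shown with a much simpler method than the paper's block integrator: backward Euler on the sample stiff problem y' = -50(y - cos t), y(0) = 0, at a step size where forward Euler is unstable. The test problem and step size are illustrative, not taken from the paper.

```python
import math

lam, h = 50.0, 0.1     # stiffness parameter and step size (assumed)
n = 50                 # 50 steps of size 0.1 over [0, 5]

y_be, y_fe, t = 0.0, 0.0, 0.0
for _ in range(n):
    t_next = t + h
    # backward Euler: solve y_{k+1} = y_k - h*lam*(y_{k+1} - cos t_{k+1})
    y_be = (y_be + h * lam * math.cos(t_next)) / (1 + h * lam)
    # forward Euler for comparison; amplification |1 - h*lam| = 4 > 1
    y_fe = y_fe + h * (-lam) * (y_fe - math.cos(t))
    t = t_next

# backward Euler tracks the slow solution y ~ cos t; forward Euler blows up
print(round(y_be, 3), abs(y_fe) > 1e3)
```

The implicit step stays stable at any step size for this problem (A-stability), which is the property the paper's implicit LMM exploits.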
Aims: This paper presents a trace-based model in a Knowledge Acquisition System for valuing knowledge. In Case-Based Reasoning (CBR), problems are solved on the basis of the solutions to similar past problems. From the system’s point of view this may be true, but from the user’s point of view, identical problems may need different solutions. This is because CBR suffers from the “frame problem”: in some situations, the context information is missing. Moving from Case-Based Reasoning to Trace-Based Reasoning (TBR) solves this problem. Trace-Based Reasoning is an extension of Case-Based Reasoning that allows the context to be included in the reasoning. Study Design: The model includes three related stages in solving problems: the first is a context-aware information retrieval stage; the second traces the user’s tasks in order to cover all the needed elements in the environment of the given problem; and the third is processed implicitly via the back-propagation feature of the neuro-fuzzy module. Place and Duration of Study: Evaluation and analysis of hospital disaster preparedness in Jeddah over six months. Methodology: Six factors were utilized in the Adaptive Neuro-Fuzzy Inference module that covers the second stage (the Task Analysis Module) of the proposed system, alongside the back-propagation process. Training was based on the gathered survey data. The purpose of the training is to adjust the model parameters, particularly the input membership function parameters and the corresponding output values. Results: After training the model with proper data, a clear orientation towards the best usage of knowledge takes place. Six modules were developed for the second stage with different types of input/output membership functions and trained on an input array. The modules were compared based on their ability to train with the lowest error values.
The Gaussian input membership function, paired with either a constant or a linear output membership function, was the best choice for the proposed system’s second stage, the Task Analysis Module. Conclusion: This model can be utilized by firms, by societies, or even in individuals’ life events. Among the six factors affecting the knowledge valuation process, the context of knowledge is the most important, because its changes were more noticeable than those of the others.
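The Gaussian input membership function selected above can be written down directly: mu(x) = exp(-(x - c)^2 / (2*sigma^2)), where the centre c and width sigma are the trainable premise parameters that the back-propagation process adjusts. The values below are illustrative, not the study's fitted parameters.

```python
import math

def gaussmf(x, c, sigma):
    """Gaussian membership degree of x for a fuzzy set centred at c."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# membership is 1.0 at the centre and decays smoothly away from it
print(gaussmf(5.0, 5.0, 2.0))            # 1.0
print(round(gaussmf(7.0, 5.0, 2.0), 3))  # exp(-0.5) ~ 0.607
```

In a first-order Sugeno-style module, each such membership degree then weights a constant or linear output function, which is the constant/linear pairing the abstract compares.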
In this paper, we determine coefficient bounds for functions in certain subclasses of analytic functions of complex order, which are introduced here by means of the nonhomogeneous Cauchy-Euler differential equation of order m. Our main result contains some corollaries as special cases.
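For reference, the general nonhomogeneous Cauchy-Euler differential equation of order m has the form (the specific normalization used in the paper's subclass construction may differ):

\[
x^{m} y^{(m)}(x) + a_{m-1}\, x^{m-1} y^{(m-1)}(x) + \cdots + a_{1}\, x\, y'(x) + a_{0}\, y(x) = g(x),
\]

with constant coefficients \(a_{0}, \ldots, a_{m-1}\) and a given nonhomogeneous term \(g\).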
We propose in this work two automatic Arabic (Indian) digit recognition systems using a real-life dataset of 3000 bank checks. The systems extract features from training-set images of 7390 isolated digits (0-9). These features are multi-scale, capturing narrow-, intermediate-, and large-scale qualities of the image: the gradient features correspond to the narrow scale, the structural features to the intermediate scale, and the concavity features to the large scale. These features are employed by two different statistical classifiers: Hidden Markov Models (HMM) and Support Vector Machines (SVM). The two independent recognition systems use the CENPARMI Arabic bank check database for training and testing. To select the optimal parameters for feature extraction and for the HMM classifier, the CENPARMI training dataset is divided into training and verification subsets. After tuning the two systems’ parameters, they are tested on 3035 unseen digit images. The average recognition rates for the HMM and SVM systems are 97.86% and 99.04%, respectively. The presented systems provide state-of-the-art recognition results on the CENPARMI database, reporting higher recognition rates than twelve previously published systems, especially for the SVM system. After analyzing the classification errors, the authors conclude that some of these errors are inevitable, as they are most probably attributable to labeling errors in the original database, distinct writing styles of certain digits, and genuine faults.
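The narrow-scale gradient features can be sketched as a coarse histogram of gradient orientations over a digit image. This is only a hedged illustration of the gradient part on a toy image; the actual systems add structural and concavity features at the other two scales and feed the combined vector to the HMM/SVM classifiers, and the bin count here is assumed.

```python
import numpy as np

# toy 8x8 "digit": a crude vertical stroke (stand-in for a real image)
img = np.zeros((8, 8))
img[2:6, 3] = 1.0

gy, gx = np.gradient(img)                 # finite-difference gradients
mag = np.hypot(gx, gy)                    # gradient magnitude
ang = np.arctan2(gy, gx)                  # orientation in (-pi, pi]

bins = 8                                  # orientation bins (assumed)
hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
features = hist / (hist.sum() + 1e-12)    # normalised feature vector

print(features.round(2))
```

Concatenating such per-scale vectors is what makes the representation "multi-scale" in the abstract's sense.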
The paper deals with the effects of radiation and Soret number variation, in the presence of a heat source/sink, on unsteady laminar boundary layer flow of a chemically reacting fluid along a semi-infinite vertical plate, taking the Eckert number into account. A magnetic field of uniform strength is applied normal to the flow. The governing boundary layer equations are solved numerically using the Crank-Nicolson method, and the simulation is carried out in a C program. Graphical results for the velocity, temperature, and concentration fields, and tabulated values of the skin friction and the Nusselt and Sherwood numbers, are presented and discussed under various parametric conditions. This study finds that the skin friction, Nusselt number, temperature, and velocity of the fluid increase in the presence of a heat source and for increasing values of the Eckert number (Ec).
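The implicit tridiagonal step underlying the Crank-Nicolson scheme can be shown on a much simpler model than the paper's coupled boundary-layer equations: the 1-D heat equation u_t = u_xx on [0, 1] with homogeneous Dirichlet boundaries. Grid sizes and the test problem are illustrative only.

```python
import numpy as np

nx, nt = 21, 40
dx, dt = 1.0 / (nx - 1), 0.005
r = dt / (2 * dx ** 2)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                     # initial temperature profile

# Crank-Nicolson: (I - r*D2) u^{n+1} = (I + r*D2) u^n, D2 = 2nd difference
D2 = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
      + np.diag(np.ones(nx - 1), -1))
A = np.eye(nx) - r * D2
B = np.eye(nx) + r * D2
A[0, :] = 0.0; A[-1, :] = 0.0             # enforce u(0) = u(1) = 0
A[0, 0] = A[-1, -1] = 1.0
B[0, :] = 0.0; B[-1, :] = 0.0

for _ in range(nt):
    u = np.linalg.solve(A, B @ u)         # one implicit time step

exact = np.exp(-np.pi ** 2 * nt * dt) * np.sin(np.pi * x)
print(round(float(np.abs(u - exact).max()), 4))
```

The scheme is second-order accurate in time and space and unconditionally stable, which is why it suits the stiff, coupled systems arising from boundary-layer flows.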
Aims: In this study, we investigated three lottery strategies usually employed by lottery players: the random, low frequency, and high frequency strategies. For the three strategies, we considered whether the selection of numbers in the Oyo State lottery occurred with equal probability, whether the lottery winning numbers occurred with equal probability, whether the performance of a strategy was associated with the amount of historical information considered, and whether a game strategy outperformed the others using the game’s history. Place and Duration of Study: The Oyo State Lottery in Oyo State, Nigeria was used as a case study. Methodology: The data used for this research consisted of the 2011 winning numbers of the Daily draw game, collected from the Oyo State Lottery Commission. The data were used to simulate the random, low frequency, and high frequency game strategies. Statistical tests including the runs test, the Chi-square goodness-of-fit test, the one-way ANOVA test, and the Least Significant Difference test were carried out to test the different hypotheses. Results: For H1 (each number is equally selected by the public), |Z| = 20.98 > Z1-α/2 = 1.96. For H2 (the winning numbers occur with equal probability), P > .05 across the months of the year. For H3 (there is no performance difference in the strategies with a small amount of historical information), P = .06, and for H4 (there is no performance difference in the strategies with a large amount of historical information), the one-way ANOVA test gave P = .20. For H5 (there is no difference in the performance of the three strategies), the ANOVA test yielded P = .013. Further tests revealed that the low frequency and random strategies, and the low and high frequency strategies, differed from each other at the 5% significance level. Conclusion: Players do not select lottery numbers randomly, but rather based on certain strategies. The Oyo State lottery winning numbers are selected with equal probability.
Thus, we can say that the process the Oyo State Lottery Commission uses to generate winning numbers is not biased. From the simulation results, the low frequency strategy had the highest number of matches among the three strategies considered. Introducing the small and large historical-information components into the ANOVA tests revealed no strategy to be better than the others. Using the full historical information, however, the three strategies were found to be significantly different from one another at the 5% level. Further tests revealed that the random and low frequency strategies, and the low and high frequency strategies, are significantly different from each other at the 5% level of significance. Thus, in this study, the low frequency strategy performed better than the other two.
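The Chi-square goodness-of-fit idea behind H2 (do the winning numbers occur with equal probability?) can be sketched as follows. The draws below are simulated, not the Oyo State data, and a 90-ball game is assumed; the statistic is compared against the 5% critical value for 89 degrees of freedom, which is approximately 112.0.

```python
import random

random.seed(1)
# simulated winning numbers from a hypothetical 90-ball lottery
draws = [random.randint(1, 90) for _ in range(3600)]

observed = [draws.count(n) for n in range(1, 91)]  # count of each number
expected = len(draws) / 90                         # equal-probability count
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# fail to reject "equal probability" when chi2 is below the critical value
print(round(chi2, 1), chi2 < 112.0)
```

A large statistic would indicate that some numbers are drawn more often than chance allows, i.e. a biased generating process.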