Recurrent neural networks have been widely used as auto-regressive models for time series. The most common training method for recurrent neural networks is back-propagation. However, recurrent neural networks trained with back-propagation can become trapped at local minima and saddle points, in which case the auto-regressive model cannot effectively capture time-series patterns. To address these problems, we propose a hybrid recurrent neural network training algorithm consisting of two phases: exploration and exploitation. The exploration phase uses synchronous particle swarm optimization to search for parameter settings with high activation scores and low error. In the exploitation phase, the results of the exploration phase are further trained with the proposed enhanced back-propagation, an improvement over traditional back-propagation that aggregates temporal errors across timestamps. We evaluate the proposed methods on four real-world datasets. Applied to both regularized and adaptive-momentum back-propagation, our algorithm increases convergence speed by 10% to 20% and reduces the testing mean squared error (MSE) at convergence by 5% to 30%. Using particle swarm optimization with an activation list in the exploration phase, the hybrid training algorithm reduces the testing MSE at convergence by more than 30% compared with traditional back-propagation.
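The two-phase scheme described above can be sketched in code. This is a minimal illustration under stated assumptions, not the paper's implementation: a tiny Elman RNN on a synthetic sine series, synchronous particle swarm optimization for exploration (all particles are evaluated before the global best is updated), and plain gradient descent with back-propagation through time for exploitation. The abstract does not specify the activation-score criterion, the activation list, or the details of the enhanced back-propagation, so standard MSE and standard BPTT stand in for them here.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 4  # hidden units (illustrative size, not from the paper)

# Parameter layout: Wxh (H,), Whh (H,H), Why (H,), bh (H,), by (scalar)
DIM = H + H * H + H + H + 1

def unpack(theta):
    i = 0
    Wxh = theta[i:i + H]; i += H
    Whh = theta[i:i + H * H].reshape(H, H); i += H * H
    Why = theta[i:i + H]; i += H
    bh = theta[i:i + H]; i += H
    return Wxh, Whh, Why, bh, theta[i]

def forward(theta, xs):
    """One-step-ahead predictions: ys[t] estimates xs[t+1]."""
    Wxh, Whh, Why, bh, by = unpack(theta)
    h = np.zeros(H); hs, ys = [h], []
    for x in xs[:-1]:
        h = np.tanh(Wxh * x + Whh @ h + bh)
        hs.append(h); ys.append(Why @ h + by)
    return np.array(ys), hs

def mse(theta, xs):
    ys, _ = forward(theta, xs)
    return float(np.mean((ys - xs[1:]) ** 2))

def bptt_grad(theta, xs):
    """Back-propagation through time; errors are summed over all timesteps."""
    Wxh, Whh, Why, bh, by = unpack(theta)
    ys, hs = forward(theta, xs)
    T = len(ys)
    dWxh, dWhh = np.zeros(H), np.zeros((H, H))
    dWhy, dbh, dby = np.zeros(H), np.zeros(H), 0.0
    dh_next = np.zeros(H)
    for t in range(T - 1, -1, -1):
        dy = 2.0 * (ys[t] - xs[t + 1]) / T
        h_t, h_prev = hs[t + 1], hs[t]
        dWhy += dy * h_t; dby += dy
        dz = (dy * Why + dh_next) * (1 - h_t ** 2)  # through tanh
        dWxh += dz * xs[t]
        dWhh += np.outer(dz, h_prev)
        dbh += dz
        dh_next = Whh.T @ dz
    return np.concatenate([dWxh, dWhh.ravel(), dWhy, dbh, [dby]])

def pso_explore(xs, n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5):
    """Synchronous PSO: the whole swarm is scored before gbest moves."""
    pos = rng.normal(0.0, 0.5, (n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([mse(p, xs) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, DIM))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p, xs) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

def exploit(theta, xs, lr=0.05, steps=300):
    """Gradient refinement of the PSO result, keeping the best iterate seen."""
    best, best_f = theta.copy(), mse(theta, xs)
    for _ in range(steps):
        g = bptt_grad(theta, xs)
        n = np.linalg.norm(g)
        if n > 1.0:
            g = g / n  # clip for stability
        theta = theta - lr * g
        f = mse(theta, xs)
        if f < best_f:
            best, best_f = theta.copy(), f
    return best

xs = np.sin(np.linspace(0, 4 * np.pi, 60))  # synthetic series
gbest = pso_explore(xs)          # exploration phase
theta = exploit(gbest, xs)       # exploitation phase
```

Because the exploitation phase returns the best iterate it has seen, its final error is never worse than the error of the swarm's global best, mirroring the intended division of labor: PSO escapes poor basins, gradient descent refines within the chosen one.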