Bayesian neural networks (BNNs) have been proposed to address the problem of model uncertainty in training. By associating each weight with a conditional probability distribution, BNNs can mitigate the overfitting commonly seen in conventional neural networks. Because BNNs make heavy use of Gaussian random variables, they require a well-optimized Gaussian Random Number Generator (GRNG), and the high hardware cost of conventional GRNGs makes the hardware realization of BNNs challenging. In this paper, a new hardware acceleration architecture for variational inference in BNNs is proposed to facilitate the applicability of BNNs to larger-scale applications. In addition, the proposed implementation introduces the RAM-based Linear Feedback GRNG (RLF-GRNG) for efficient weight sampling in BNNs. The RAM-based linear feedback method effectively utilizes RAM resources for parallel Gaussian random number generation while requiring only limited, sharable control logic. Implementation on an Altera Cyclone V FPGA shows that the RLF-GRNG consumes far fewer RAM resources than other GRNG methods. Experimental results show that the proposed hardware implementation of a BNN attains accuracy similar to that of a software implementation.
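As background for the GRNG discussed above, the following is a minimal sketch (not the paper's exact RLF-GRNG design) of a common hardware-friendly approach: sum many pseudo-random bits produced by a linear-feedback shift register (LFSR) and standardize the sum, relying on the Central Limit Theorem to approximate a Gaussian sample. The 16-bit register width, the tap positions, and the sum length `k` are illustrative choices, not values taken from the paper.

```python
class LFSR:
    """16-bit maximal-length Fibonacci LFSR (polynomial x^16 + x^14 + x^13 + x^11 + 1)."""

    def __init__(self, seed=0xACE1):
        self.state = seed & 0xFFFF  # non-zero 16-bit seed (illustrative)

    def next_bit(self):
        out = self.state & 1
        # Feedback bit: XOR of taps at positions 0, 2, 3, 5
        fb = ((self.state >> 0) ^ (self.state >> 2)
              ^ (self.state >> 3) ^ (self.state >> 5)) & 1
        self.state = (self.state >> 1) | (fb << 15)
        return out


def gaussian_sample(lfsr, k=64):
    """Sum k pseudo-random bits (mean k/2, variance k/4), then standardize to ~N(0, 1)."""
    s = sum(lfsr.next_bit() for _ in range(k))
    return (s - k / 2) / (k / 4) ** 0.5


lfsr = LFSR()
samples = [gaussian_sample(lfsr) for _ in range(1000)]
```

In hardware, the bit summation maps to simple adders and the generator state to memory bits; the abstract's RLF-GRNG is described as optimizing exactly this state storage by keeping it in RAM and sharing the control logic across parallel generators.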