We use computational simulations to analyze the behavior of the recently proposed Bidirectional Activation-based Learning algorithm (BAL), which was inspired by the Generalized Recirculation algorithm (GeneRec). Both algorithms avoid the biologically implausible backpropagation of an error signal and instead propagate neuron activations, which drive the weight updates using only local variables. We take a closer look at the 4-2-4 autoencoder task, for which, despite its simplicity, neither model achieves reliable convergence. We propose a learning mode with two significantly different learning rates (BAL2) that leads to considerably more successful learning of the task. We also analyze several factors related to the hidden activations that further increase learning success. In addition, we test BAL2 on a large-scale database of handwritten digits, on which it yields relatively good performance.