Machine learning models trained on electroencephalography (EEG) data face a substantial bias challenge: fair, generalizable predictions are difficult given the variability of user characteristics and the differences between recording sessions. This work presents a new technique for bias mitigation in EEG models using synthetic data generated with Variational Autoencoders (VAEs). The VAE learns a latent space from which we can construct new EEG data that mimics user and session variability. By repeatedly refining this latent representation during training, we generate synthetic data that improves the robustness and generalizability of the model while mitigating biases caused by specific users and temporal changes. We evaluate this approach on a public EEG dataset, achieving substantial improvements in cross-user and cross-session performance relative to conventional domain adaptation baselines as well as Bayesian augmentation models. The EEG Synthesis Model with Variational Autoencoder (ESM-VAE) reported here leverages real datasets to create synthetic EEG data that significantly improves classification performance on motor imagery tasks. Models trained with the synthetic data showed a 13.40% increase in classification accuracy on previously unseen users. These findings highlight the advantages of combining real and synthetic datasets, indicating that VAEs can produce EEG data at scale and improve the performance of machine learning models.
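The abstract does not specify the implementation, but the core mechanism it describes (encode a real EEG window into a latent distribution, resample that distribution, and decode to obtain synthetic variants) can be sketched roughly as below. All dimensions, weights, and function names here are hypothetical illustrations, not the paper's actual ESM-VAE; the weights are random stand-ins for a trained model, and real use would require training the encoder/decoder with the usual reconstruction-plus-KL objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a flattened EEG window of
# 64 channels x 128 time samples, and a 16-dimensional latent space.
INPUT_DIM, LATENT_DIM = 64 * 128, 16

# Randomly initialized linear maps stand in for a trained encoder/decoder.
W_enc = rng.normal(scale=0.01, size=(INPUT_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(scale=0.01, size=(LATENT_DIM, INPUT_DIM))

def encode(x):
    """Map an EEG window to the mean and log-variance of q(z|x)."""
    h = x @ W_enc
    mu, log_var = h[:LATENT_DIM], h[LATENT_DIM:]
    return mu, log_var

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Map a latent vector back to a synthetic EEG window in (-1, 1)."""
    return np.tanh(z @ W_dec)

def synthesize(x_real, n=4):
    """Generate n synthetic variants of one real window by resampling z."""
    mu, log_var = encode(x_real)
    return np.stack([decode(reparameterize(mu, log_var)) for _ in range(n)])

x = rng.standard_normal(INPUT_DIM)   # one simulated "recorded" EEG window
synthetic = synthesize(x, n=4)       # four synthetic variants of that window
```

Resampling `z` around the encoded mean is what lets the model emit plausible variations of a recording, which is the property the abstract relies on for simulating user and session variability.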