Because training deep neural networks requires a large amount of data, many studies have addressed the problem of limited training data. Data augmentation techniques are the basic solution for increasing training data from existing data. Geometric transformations and color space augmentations are well-known augmentation techniques, but they still require some manual work and can generate only limited types of data. Therefore, there has recently been growing interest in generative-model-based augmentation, which can learn the distribution of the data. This study proposes a set of GAN-based data augmentation methods that can generate high-quality training data. The proposed networks, f-DAGAN (data augmentation generative adversarial networks), are motivated by DAGAN, which learns the data distribution from pairs of real samples. The basic f-DAGAN uses dual discriminators, handling both generated data and generated feature spaces, to better learn the given data. Further versions of f-DAGAN are proposed for generating hard or easy data; they add dual classifiers, for both generated data and feature spaces, to control the generator. Hard data is useful for optimized training that increases a target performance measure such as classification accuracy. Easy data generation is especially useful in few-shot learning. The quality of the generated data has been validated in two ways: t-SNE visualization of the generated data, and classification accuracy obtained by training with the generated data on the MNIST data set. The t-SNE representations show that data generated by f-DAGAN are more evenly distributed across classes than data from existing generative-model-based augmentation methods. The f-DAGAN also achieves the best classification accuracy when training with generated data. The f-DAGAN version for easy and hard data generation generates data well from five-shot learning and performs well in sample data generation experiments.
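The basic f-DAGAN's dual-discriminator idea can be illustrated with a minimal sketch: one discriminator scores the generated image and a second scores its feature-space representation, and the generator is trained against the sum of both losses. This is an assumption-laden toy illustration, not the paper's implementation; all "networks" below are stand-in random linear maps, and the names (`W_gen`, `W_feat`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in "networks" (random linear projections, for illustration only):
W_gen = rng.normal(size=(16, 64))   # generator: latent (16) -> image (64)
W_feat = rng.normal(size=(64, 8))   # feature extractor: image -> features (8)
w_d_img = rng.normal(size=64)       # image-space discriminator
w_d_feat = rng.normal(size=8)       # feature-space discriminator

def generator_loss(z):
    """Non-saturating GAN loss summed over the dual discriminators."""
    fake_img = np.tanh(z @ W_gen)            # generated image
    fake_feat = fake_img @ W_feat            # its feature-space representation
    d_img = sigmoid(fake_img @ w_d_img)      # image discriminator's score
    d_feat = sigmoid(fake_feat @ w_d_feat)   # feature discriminator's score
    # The generator wants BOTH discriminators to score its fakes as real (1),
    # so both log-terms contribute to one combined loss.
    return float(-(np.log(d_img + 1e-8) + np.log(d_feat + 1e-8)).mean())

z = rng.normal(size=(4, 16))  # a small batch of latent vectors
loss = generator_loss(z)
```

In a real setting each stand-in map would be a trained network and the discriminators would also be updated on real images and real features, but the combined objective has the same shape: the feature-space term gives the generator an extra training signal beyond the pixel-space discriminator.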