The increasing demand for genre-specific creative writing, particularly in children's storytelling, has revealed significant limitations in the controllability and coherence of large language models (LLMs) during story generation. While models such as LLaMA-2 are capable of generating fluent narratives, they often fail to align their output consistently with a specified genre prompt, undermining thematic consistency. This paper addresses the challenge of enhancing genre fidelity in children's story generation by presenting a genre-conditioned generation framework built on a LLaMA-2 7B model fine-tuned on a curated, multi-genre children's story dataset. To evaluate genre alignment, we utilize a RoBERTa-based classifier trained for multi-class genre classification across key genres in children's literature, including Horror, Science Fiction, Humor & Comedy, and Mystery & Detective. Comparative analysis of stories generated by the pretrained and fine-tuned models demonstrates that fine-tuning significantly improves genre controllability. By improving genre fidelity, this work enhances the ability of LLMs to generate more thematically consistent and engaging children's stories, supporting the development of controlled, genre-aware LLMs for creative writing applications.
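As a minimal sketch of how the classifier-based genre-alignment evaluation described above could be implemented with Hugging Face Transformers: the checkpoint name `genre-roberta-children-stories` is a hypothetical placeholder for a fine-tuned RoBERTa genre classifier, and the four-genre label set mirrors the genres named in the abstract; the authors' actual checkpoints and label mapping are not specified here.

```python
# Sketch: score genre alignment of generated stories with a RoBERTa classifier.
# Assumption: "genre-roberta-children-stories" is a hypothetical fine-tuned
# checkpoint; substitute the actual classifier path and label order.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

GENRES = ["Horror", "Science Fiction", "Humor & Comedy", "Mystery & Detective"]

tokenizer = AutoTokenizer.from_pretrained("genre-roberta-children-stories")
model = AutoModelForSequenceClassification.from_pretrained(
    "genre-roberta-children-stories", num_labels=len(GENRES)
)
model.eval()

def genre_alignment(stories, prompted_genres):
    """Fraction of generated stories whose predicted genre matches the prompt."""
    hits = 0
    for story, genre in zip(stories, prompted_genres):
        inputs = tokenizer(story, truncation=True, max_length=512,
                           return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        predicted = GENRES[logits.argmax(dim=-1).item()]
        hits += int(predicted == genre)
    return hits / len(stories)
```

Applied to matched batches of pretrained- and fine-tuned-model outputs, a metric of this form yields the comparative genre-controllability scores the abstract refers to.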