Large Language Models (LLMs) such as DeepSeek, ChatGPT, and Google Gemini are transforming the landscape of artificial intelligence and natural language processing through advanced reasoning, multimodal learning, and strong domain-specific performance. Despite their rapid adoption and broad potential across education, healthcare, finance, and other sectors, few studies have directly compared these models. This work presents a comprehensive technical and functional comparison of DeepSeek, ChatGPT, and Gemini across multiple dimensions, including architecture, training methodology, benchmark performance, and domain generalization. Specifically, DeepSeek employs a Mixture-of-Experts (MoE) architecture for task-specific efficiency; ChatGPT uses a dense transformer model refined with reinforcement learning from human feedback (RLHF); and Gemini integrates robust multimodal capabilities across text, code, and images. To enrich our analysis, we conducted a detailed survey of 155 undergraduate students spanning all four academic years (1st–4th), which revealed that ChatGPT is the most widely used tool for academic purposes, while Gemini and DeepSeek exhibit distinct strengths on specific tasks. Our findings elucidate the technical distinctions and performance trade-offs among the three models and offer insights into their real-world adoption and future prospects in AI research and application.