The growing adoption of smart city technologies has amplified the complexity and scale of cybersecurity threats targeting urban infrastructure. Traditional intrusion detection methods often fail to adapt to the heterogeneous and evolving nature of these environments. In this paper, we propose an adaptive Multi-Agent Reinforcement Learning (MARL) framework for intrusion mitigation that dynamically aligns with the digital maturity of smart cities. Our model integrates session-level intrusion detection features with structured smart city indices to enable city-aware policy learning. Each agent specializes in monitoring a specific domain, such as network behavior or user activity, and collaborates with the others through a shared environment guided by deep recurrent Q-networks. To evaluate the effectiveness of the proposed framework, we combine two real-world datasets: a cybersecurity intrusion detection dataset and a smart city readiness index. Extensive experiments demonstrate that the MARL approach outperforms both traditional machine learning and deep learning baselines on accuracy, precision, recall, F1-score, and AUC-ROC. Additional evaluations show the model's robustness to noise, its adaptability across city tiers, and its computational scalability. These results highlight the potential of context-aware, multi-agent learning systems for enhancing cyber resilience in smart city deployments.
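To illustrate the multi-agent setup described above, the following is a minimal schematic sketch in plain Python. It is not the paper's implementation: tabular Q-learning stands in for the deep recurrent Q-networks, the two domains ("network behavior" and "user activity"), the action names, and the session labels are all hypothetical, and a single shared reward models the collaboration through the common environment.

```python
import random

class DomainAgent:
    """One agent per monitored domain (e.g. network behavior, user activity).
    Tabular Q-learning is a stand-in for the paper's deep recurrent Q-networks."""
    def __init__(self, actions, alpha=0.1, eps=0.1):
        self.q = {}              # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.eps = alpha, eps

    def act(self, state, greedy=False):
        # Epsilon-greedy during training; purely greedy at evaluation time.
        if not greedy and random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward):
        # One-step value update toward the shared reward (bandit-style, gamma=0,
        # for simplicity; the paper's agents bootstrap over time with DRQNs).
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward - old)

def train(episodes=2000):
    """Shared environment: both agents observe a session state and receive one
    shared reward when their joint mitigation matches the (hypothetical) label."""
    random.seed(0)
    net_agent = DomainAgent(actions=["allow", "block"])   # network-behavior domain
    usr_agent = DomainAgent(actions=["allow", "lock"])    # user-activity domain
    for _ in range(episodes):
        intrusion = random.random() < 0.3                 # hypothetical session label
        state = "anomalous" if intrusion else "normal"
        a_net, a_usr = net_agent.act(state), usr_agent.act(state)
        # Joint reward: mitigate intrusions, leave benign sessions untouched.
        correct = (a_net == "block" and a_usr == "lock") if intrusion \
                  else (a_net == "allow" and a_usr == "allow")
        reward = 1.0 if correct else -1.0
        net_agent.update(state, a_net, reward)
        usr_agent.update(state, a_usr, reward)
    return net_agent, usr_agent

if __name__ == "__main__":
    net, usr = train()
    # Greedy mitigation policy learned for each session state.
    print(net.act("anomalous", greedy=True), usr.act("anomalous", greedy=True))
    print(net.act("normal", greedy=True), usr.act("normal", greedy=True))
```

The shared reward is what makes this multi-agent rather than two independent learners: each agent's value estimates depend on the other agent's behavior, so the policies co-adapt, which is the coordination property the framework relies on.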