Diabetic retinopathy is an eye disorder caused primarily by the high blood sugar levels associated with diabetes. As the condition progresses, the retina at the back of the eye is damaged, leading to permanent visual impairment if left untreated. Each year, around 5-15% of the general population and 30-50% of people with type-2 diabetes lose their vision to diabetic retinopathy, and an estimated 103.12 million adults worldwide currently suffer from the disease. It is regarded as one of the leading causes of vision loss among working-age people in many countries. Despite this alarming prevalence, diabetic retinopathy can be fully treated if identified at an early stage. Fundus images of the retina are one of the fundamental mediums for clinically detecting and analyzing diabetic retinopathy through image-processing techniques. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two of the most widely used deep learning models for classifying and extracting information from fundus images, though these models have significant limitations, such as overfitting and poor classification of imbalanced classes when training images are limited. This study proposes a hybrid autoencoder model called "DBNet", which combines a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) to enhance the accuracy of diabetic retinopathy prediction from retinal fundus images. A total of 5,590 retinal images are processed using the proposed hybrid model and classified into five distinct categories based on their severity level. The experiments are performed on the Kaggle Diabetic Retinopathy Dataset, and performance is evaluated using the Kappa metric. The proposed DBNet yields 93% accuracy, whereas the conventional CNN and LSTM models achieve accuracies of 69% and 78%, respectively. These promising results demonstrate the viability of the proposed model for identifying diabetic retinopathy at an early stage and thereby reducing the number of resulting cases of blindness.
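The abstract evaluates agreement with the Kappa method. A minimal sketch of the quadratic weighted Kappa commonly used for the five-level Kaggle diabetic-retinopathy grades might look as follows; the function name and the quadratic weighting are illustrative assumptions, since the abstract does not specify the exact Kappa variant:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic weighted Cohen's kappa for ordinal labels 0..n_classes-1.

    Hypothetical helper for illustration; 1.0 means perfect agreement,
    0.0 chance-level agreement, negative values worse than chance.
    """
    n = n_classes
    # Observed confusion matrix O[i][j]: true grade i, predicted grade j.
    O = [[0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    total = len(y_true)
    # Marginal histograms of true and predicted grades.
    hist_t = [sum(O[i]) for i in range(n)]
    hist_p = [sum(O[i][j] for i in range(n)) for j in range(n)]
    # Expected matrix under chance agreement, scaled to the same total.
    E = [[hist_t[i] * hist_p[j] / total for j in range(n)] for i in range(n)]
    # Quadratic weights: predictions far from the true grade cost more.
    w = [[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)] for i in range(n)]
    num = sum(w[i][j] * O[i][j] for i in range(n) for j in range(n))
    den = sum(w[i][j] * E[i][j] for i in range(n) for j in range(n))
    return 1.0 - num / den

# Example: perfect agreement on the five severity grades gives kappa = 1.0.
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))  # -> 1.0
```

Unlike plain accuracy, this metric penalises a grade-4 image misclassified as grade 0 far more heavily than one misclassified as grade 3, which matters for an ordinal severity scale.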