With the rapid development of indoor robots, simultaneous localization and mapping (SLAM) has become a popular research area in robotics. SLAM localizes a robot by constructing and updating a navigation map of the environment as the robot moves through it. However, current SLAM systems rely heavily on optical sensors, either light detection and ranging devices (Lidar-SLAM) or cameras (V-SLAM). These approaches have well-known drawbacks: monocular cameras suffer from scale ambiguity, stereo cameras provide low range accuracy due to their short baselines, both are sensitive to environmental disturbances (e.g., lighting and dust), and scanning lidar is costly and bulky. These limitations of V-SLAM and Lidar-SLAM motivate me to integrate a novel lightweight SLAM approach based on low-cost sensors into the current robotic SLAM framework. The objective of this work is to pioneer a robust 3D indoor SLAM framework (Infradar-SLAM) based on the deep fusion of data from low-cost single-chip infrared and radar sensors, addressing the drawbacks of state-of-the-art Lidar-SLAM and V-SLAM.