Rapid and precise point cloud segmentation is a prerequisite for real-time, robust autonomous perception and environmental understanding, which requires balancing speed and accuracy in architecture design. However, recent lightweight architectures, though fast enough, rely on domain adaptation from laboriously constructed synthetic datasets and on sophisticated post-processing procedures to improve their performance, neglecting the rich visual information that cameras capture alongside LiDAR sensors. In this paper, such color information is embedded at the data level to boost the performance of real-time point cloud segmentation. Furthermore, a multiscale lightweight fully convolutional network, VIASeg, is proposed, built on a newly designed Super Squeeze Residual module and on Semantic Connections from higher to lower convolutional layers, which improve performance by denoising features with high-level semantic information. The superiority of the proposed method is validated and demonstrated through comparative and ablation experiments, while its real-time characteristic is maintained.