The Artificial Intelligence and Machine Learning Workbench is an advanced laboratory solution designed to provide hands-on experience in AI, ML, computer vision, robotics, and edge computing. It integrates high-performance computing systems, embedded AI platforms, FPGA-based acceleration, and industrial video analytics to build future-ready skills in real-world applications.
From predictive analytics to natural language processing, and from computer vision to autonomous systems, the potential applications of AI and ML are boundless. However, harnessing this potential necessitates not only theoretical understanding but also practical experimentation and application.
Hence, we propose this laboratory, which will give students access to industrial infrastructure and nurture skill sets that are highly relevant to the current and future job market.
AI is best learned through hands-on experience and experimentation. A well-equipped AI lab provides students with access to cutting-edge hardware and software tools, allowing them to gain practical experience in developing and implementing AI algorithms and systems.
Opportunities in Research: The proposed laboratory facility will enable faculty and students to conduct research in various subfields of AI, such as machine learning, natural language processing, computer vision, and robotics.
Real-time object detection on a live camera using the YOLOv8n model (see sketch after this list).
Real-time face detection on a live camera using the Mediapipe library (see sketch after this list).
Real-time eye detection on a live camera using the Mediapipe library.
Real-time hand tracking on a live camera using the Mediapipe library (see sketch after this list).
Real-time object tracking based on colour on a live camera with the HSV scheme (see sketch after this list).
Real-time multiple-object tracking based on colour on a live camera with the HSV scheme.
Real-time object contour detection based on colour on a live camera with the HSV scheme.
Real-time face recognition on a live camera using the Face Recognition library (see sketch after this list).
Real-time pose detection on a live camera using the Mediapipe library.
Real-time hand detection on a live camera using the Mediapipe library.
Distinguishing between the right and left hands on a live camera using the Mediapipe library.
Real-time gesture detection on a live camera using the Mediapipe library.
Storing trained gestures in a pickle file so they can be loaded directly for live-camera recognition using the Mediapipe library (see sketch after this list).
Real-time object detection based on user prompts on a live camera.
Implementation of text generation using the Llama large language model (see sketch after this list).
Implementation of text + vision with the Llava model, which uses the CLIP vision encoder (see sketch after this list).
Implementation of video analytics on a VLSI platform.
Robotics: simulated ROS projects are provided for common experiments, covering node programming and compilation steps, including publisher, subscriber, service, and client nodes (see sketch after this list).
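
A minimal sketch of the YOLOv8n experiment, assuming the `ultralytics` and `opencv-python` packages and a webcam at index 0; the pretrained `yolov8n.pt` weights are downloaded automatically on first use:

```python
# Real-time object detection with YOLOv8n on a live camera feed.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # pretrained weights, fetched on first run
cap = cv2.VideoCapture(0)           # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)   # run inference on one frame
    annotated = results[0].plot()           # draw boxes and class labels
    cv2.imshow("YOLOv8n", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```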
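The face-detection experiment can be sketched with MediaPipe's `face_detection` solution; the eye-detection experiment works the same way but reads individual landmarks from the `face_mesh` solution instead. A minimal sketch, assuming the `mediapipe` package:

```python
# Real-time face detection with MediaPipe on a live camera feed.
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection
mp_draw = mp.solutions.drawing_utils
cap = cv2.VideoCapture(0)

with mp_face.FaceDetection(min_detection_confidence=0.5) as detector:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.detections:
            for det in results.detections:
                mp_draw.draw_detection(frame, det)  # box plus keypoints
        cv2.imshow("Face detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```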
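The hand-tracking, hand-detection, and left/right-hand experiments all build on MediaPipe's `hands` solution, which returns 21 landmarks per hand together with a handedness label. A minimal sketch:

```python
# Real-time hand tracking and left/right labelling with MediaPipe.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils
cap = cv2.VideoCapture(0)

with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for landmarks, handed in zip(results.multi_hand_landmarks,
                                         results.multi_handedness):
                mp_draw.draw_landmarks(frame, landmarks,
                                       mp_hands.HAND_CONNECTIONS)
                label = handed.classification[0].label  # "Left" or "Right"
                h, w = frame.shape[:2]
                wrist = landmarks.landmark[0]           # landmark 0 is the wrist
                cv2.putText(frame, label,
                            (int(wrist.x * w), int(wrist.y * h)),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("Hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```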
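The colour-based tracking and contour experiments share one OpenCV pipeline: convert the frame to HSV, threshold it with `cv2.inRange`, then find contours in the mask. The HSV bounds below pick out a red-ish hue and are assumptions to be tuned for the object being tracked:

```python
# Colour-based object tracking in HSV space with OpenCV.
import cv2
import numpy as np

LOWER = np.array([0, 120, 70])      # lower HSV bound (H, S, V) - tune per object
UPPER = np.array([10, 255, 255])    # upper HSV bound
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)        # binary mask of the colour
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:                           # one box per coloured object
        if cv2.contourArea(c) > 500:             # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("HSV tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```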
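A minimal sketch of the face-recognition experiment with the `face_recognition` library; `person.jpg` is a hypothetical enrolment image of the person to be recognised:

```python
# Real-time face recognition against one known face.
import cv2
import face_recognition

known = face_recognition.load_image_file("person.jpg")   # enrolment image
known_encoding = face_recognition.face_encodings(known)[0]
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    locations = face_recognition.face_locations(rgb)
    encodings = face_recognition.face_encodings(rgb, locations)
    for (top, right, bottom, left), enc in zip(locations, encodings):
        match = face_recognition.compare_faces([known_encoding], enc)[0]
        label = "known" if match else "unknown"
        cv2.rectangle(frame, (left, top), (right, bottom), (255, 0, 0), 2)
        cv2.putText(frame, label, (left, top - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
    cv2.imshow("Face recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```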
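One way to sketch the pickle-based gesture store: flatten the 21 MediaPipe hand landmarks into a feature vector, save labelled examples with `pickle`, and classify new frames by nearest neighbour. The file name and the nearest-neighbour rule are illustrative assumptions, not a fixed protocol:

```python
# Saving and reloading trained gestures with pickle.
import pickle
import numpy as np

GESTURE_FILE = "gestures.pkl"   # hypothetical path for the trained gestures

def to_vector(hand_landmarks):
    """Flatten 21 MediaPipe landmarks into a 63-value feature vector."""
    return np.array([[p.x, p.y, p.z] for p in hand_landmarks.landmark]).ravel()

def save_gestures(examples):
    """Persist a list of (label, feature_vector) pairs to disk."""
    with open(GESTURE_FILE, "wb") as f:
        pickle.dump(examples, f)

def classify(vector):
    """Return the label of the stored example nearest to `vector`."""
    with open(GESTURE_FILE, "rb") as f:
        examples = pickle.load(f)
    label, _ = min(((lbl, np.linalg.norm(vec - vector))
                    for lbl, vec in examples), key=lambda t: t[1])
    return label
```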
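A minimal sketch of Llama text generation through the Hugging Face `transformers` pipeline; the model id below is an assumption (official Llama checkpoints are gated and require licence acceptance), and any causal language model id works the same way:

```python
# Text generation with a Llama-family model via transformers.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed model id; gated checkpoint
    device_map="auto",                      # needs the accelerate package
)
out = generator("Explain edge computing in one paragraph.",
                max_new_tokens=128)
print(out[0]["generated_text"])
```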
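A minimal sketch of the text + vision experiment with a community LLaVA checkpoint on `transformers` (LLaVA pairs a CLIP vision encoder with a Llama-style language model); the model id, prompt template, and image path are assumptions:

```python
# Image + text question answering with a LLaVA checkpoint.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"       # assumed community checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto")

image = Image.open("scene.jpg")             # hypothetical input image
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(text=prompt, images=image,
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(out[0], skip_special_tokens=True))
```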
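A minimal sketch of the publisher and subscriber nodes for the ROS experiments (ROS 1 / `rospy`); the `chatter` topic is the conventional tutorial name, and the service and client nodes follow the same pattern with `rospy.Service` and `rospy.ServiceProxy`:

```python
# A ROS 1 publisher node and the matching subscriber callback.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rospy.init_node("talker")
    rate = rospy.Rate(1)                      # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello"))     # send one message
        rate.sleep()

def listener():
    rospy.init_node("listener")
    rospy.Subscriber("chatter", String,
                     lambda msg: rospy.loginfo(msg.data))
    rospy.spin()                              # hand control to the callback loop

if __name__ == "__main__":
    talker()   # run listener() in a second node instead
```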
