Deep Hyperspectral Imager
Objective
To create an imaging device that augments RGB cameras with additional sensing modalities such as RF, acoustic, depth, and thermal. Each of these modalities provides different information about the scene; the project aims to build a platform that fuses these multimodal images and to train that platform for object detection.
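The fusion idea described above can be illustrated with a minimal sketch: resizing a low-resolution modality frame (e.g. thermal) to match the RGB frame and alpha-blending the two, assuming both sensors have already been calibrated to the same field of view. All function names and parameters here are illustrative, not the project's actual implementation.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize (a stand-in for a library call such as cv2.resize)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def overlay(rgb, modality, alpha=0.5):
    """Blend a single-channel modality frame onto an RGB frame.
    Assumes both frames were cropped to a common field of view beforehand."""
    h, w = rgb.shape[:2]
    mod = resize_nearest(modality, h, w).astype(np.float32)
    # Normalise the modality values to 0..255 so the blend is visible.
    span = float(mod.max() - mod.min()) + 1e-9
    mod = (mod - mod.min()) / span * 255.0
    # Replicate the single channel to 3 channels and blend.
    mod3 = np.repeat(mod[:, :, None], 3, axis=2)
    fused = (1.0 - alpha) * rgb.astype(np.float32) + alpha * mod3
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy example: a black RGB frame fused with a small synthetic "thermal" frame.
rgb = np.zeros((4, 6, 3), dtype=np.uint8)
thermal = np.arange(6, dtype=np.float32).reshape(2, 3)
fused = overlay(rgb, thermal, alpha=0.5)
print(fused.shape)  # matches the RGB frame's shape
```

In practice the resize and blend would be done per frame in the capture loop, after each sensor's crop region has been fixed during calibration.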
Specific goals:
- Map a source of sound in 3D space using beamforming microphone arrays
- Obtain RF, thermal and depth images of the scene
- Calibrate the imaging sensors so that they cover a common field of view of the scene
- Integrate the imaging sensors into a single platform
- Train the hyperspectral imaging platform for object detection
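The first goal, localising a sound source with a microphone array, can be sketched with a classic delay-and-sum beamformer: delay each channel to "steer" the array toward a candidate angle, sum the channels, and pick the angle with maximum output power. The array geometry, sample rate, and signal below are illustrative assumptions, not the project's actual hardware.

```python
import numpy as np

# Illustrative parameters for a hypothetical 4-element linear array.
SPEED_OF_SOUND = 343.0   # m/s
FS = 48000               # sample rate (Hz)
N_MICS = 4
SPACING = 0.05           # inter-microphone spacing (m)

def delay_and_sum(signals, angle_deg):
    """Steer the array toward angle_deg (0 = broadside) and sum the channels."""
    angle = np.deg2rad(angle_deg)
    out = np.zeros(signals.shape[1])
    for m in range(N_MICS):
        # Arrival-time delay of mic m relative to mic 0 for this angle.
        tau = m * SPACING * np.sin(angle) / SPEED_OF_SOUND
        shift = int(round(tau * FS))
        # Advance the channel to compensate for its delay, then accumulate.
        out += np.roll(signals[m], -shift)
    return out / N_MICS

def estimate_direction(signals, angles=np.arange(-90, 91, 1)):
    """Scan candidate angles and return the one with maximum output power."""
    powers = [np.mean(delay_and_sum(signals, a) ** 2) for a in angles]
    return angles[int(np.argmax(powers))]

# Synthetic test: a 500 Hz tone arriving from 30 degrees.
t = np.arange(0, 0.1, 1 / FS)
true_angle = np.deg2rad(30)
signals = np.stack([
    np.sin(2 * np.pi * 500 * (t - m * SPACING * np.sin(true_angle) / SPEED_OF_SOUND))
    for m in range(N_MICS)
])
print(estimate_direction(signals))  # should recover an angle close to 30 degrees
```

A real deployment would use finer delay interpolation (or frequency-domain beamforming such as GCC-PHAT) and a 2D array to localise in both azimuth and elevation, but the scan-and-pick-the-peak structure is the same.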
Contents
- Components
- Getting Started with NVIDIA Jetson TX2
- Implementation: Individual sensor calibration and implementation
- Sensor Output
- Multimodal Images
- Timeline
- References
Project Members
- Aman Srivastava
- Shoban Narayan Ramesh
- Shreya Ramaprasad
Project Mentor
Prof. Mani Srivastava
Contributions and Efforts
- Sound Localisation: Shreya Ramaprasad
- Modality overlay algorithm, frame size, and calibration: Aman Srivastava, Shoban Narayan Ramesh, and Shreya Ramaprasad
- Intel RealSense camera and Jetson TX2 setup: Shoban Narayan Ramesh
- FLIR thermal camera and Kinect configuration on Pi: Aman Srivastava
- A special thanks to Amr (a student of Prof. Mani Srivastava) for guiding us throughout this project.