In this paper, we propose a new method for integrating data from multiple sensors, each of which only partially observes the system. Our approach is based on deep learning and leverages techniques from manifold learning and dimensionality reduction to systematically integrate multiple partial representations into one coherent picture. We focus on two specific cases: the "patch case," in which each sensor observes a distinct patch of the system, and the "point case," in which each point in the system is observed by a single sensor.
To address the challenge of integrating partial observations, we propose a neural network-based solution consisting of an encoder-decoder pair for each patch or point. The encoders map their respective inputs into a common latent space, allowing us to register and combine the embeddings from all sensors, even when each sensor observes only part of the system's behavior.
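The following is a minimal, illustrative sketch of this architecture in PyTorch. The class and parameter names (SensorAutoencoder, latent_dim, align_weight) and the simple pairwise alignment penalty used to register embeddings are assumptions for clarity, not the paper's exact architecture or training objective.

```python
# Illustrative sketch (assumed details): one encoder-decoder pair per sensor,
# with all encoders mapping partial observations into a shared latent space.
import torch
import torch.nn as nn

class SensorAutoencoder(nn.Module):
    """Encoder-decoder pair for a single sensor's partial observation."""
    def __init__(self, obs_dim: int, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, obs_dim)
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def training_loss(models, observations, align_weight=1.0):
    """Loss over simultaneous observations from all sensors.

    observations[i] has shape (batch, obs_dim_i): the i-th sensor's partial
    view of the same underlying system states.
    """
    latents, recon_loss = [], 0.0
    for model, x in zip(models, observations):
        z, x_hat = model(x)
        latents.append(z)
        recon_loss = recon_loss + nn.functional.mse_loss(x_hat, x)
    # Registration term (assumed form): encourage the embeddings that different
    # sensors assign to the same underlying state to agree in the latent space.
    align_loss = 0.0
    for i in range(len(latents)):
        for j in range(i + 1, len(latents)):
            align_loss = align_loss + nn.functional.mse_loss(latents[i], latents[j])
    return recon_loss + align_weight * align_loss
```

In this sketch, the reconstruction terms keep each sensor's embedding informative about its own observations, while the alignment term registers the per-sensor embeddings into a single shared representation of the system.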
Our proposed method extends previous work that fused complete sensor information, an assumption that does not hold in real-world scenarios where sensor observations are often partial and incomplete. By leveraging deep learning techniques, we can efficiently handle complex data structures and learn an informative reparameterization of the observations from different sensors.
In summary, this paper presents a novel approach to systematic sensor integration that handles partial observations from multiple sensors. Our method combines manifold learning and dimensionality reduction with deep learning to provide a robust and efficient way to integrate complex data, recovering a coherent picture of the system's behavior even when each individual sensor observes only a part of it.