
TASFAR: Unsupervised Adaptation of Deep Learning Models to Target Scenarios


In this article, we explore how TASFAR generates high-quality annotations for unlabeled target data, a process known as pseudo-labeling. The method estimates a label probability distribution from each user's data and attaches a credibility weight to every pseudo-label. Because these weights correlate positively with pseudo-label accuracy, accurate pseudo-labels receive large weights, while low-quality labels, which would otherwise degrade adaptation accuracy, are down-weighted.
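To make the weighting concrete, here is a minimal sketch of a credibility-weighted regression loss in PyTorch. The function name, tensor shapes, and the squared-error form are our own assumptions for illustration; the paper's actual training objective may differ.

```python
import torch

def weighted_pseudo_label_loss(preds: torch.Tensor,
                               pseudo_labels: torch.Tensor,
                               credibilities: torch.Tensor) -> torch.Tensor:
    """Credibility-weighted regression loss on pseudo-labeled target data.

    preds         -- (N, D) model outputs f(x_t)
    pseudo_labels -- (N, D) pseudo-labels y_hat_t
    credibilities -- (N,)   weights beta_t, larger for more trustworthy labels
    """
    per_sample = ((preds - pseudo_labels) ** 2).sum(dim=1)  # squared error per sample
    # weighting by beta_t lets accurate pseudo-labels dominate the update
    return (credibilities * per_sample).sum() / credibilities.sum().clamp(min=1e-8)
```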
To calculate the global mean density, we first initialize the pseudo-label set P and d̄_i, which counts how many times each user appears in the set. We then iterate through all (f_θs(x_t), u_t) pairs in the set C and compute the variances Var_W and Var_Y. If the distance between a candidate label y and the source-model prediction f_θs(x_t) is less than 3σ_t, we set the selector S(y) to 1; otherwise it remains 0. Finally, we update the hit count M(j) in the grid map M according to how often each user's predictions land there with high accuracy.
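The exact bookkeeping above (the sets P, C, and M, and the variance terms) is hard to reconstruct from this summary alone, but the core step of accumulating confident predictions into a grid density can be sketched roughly as follows. All names are illustrative, and a two-dimensional label space is assumed.

```python
import numpy as np

def build_label_density_map(confident_preds: np.ndarray,
                            grid_size: int,
                            lo: np.ndarray,
                            hi: np.ndarray) -> np.ndarray:
    """Accumulate confident predictions into a grid map M over the label space.

    confident_preds -- (N, 2) source-model predictions treated as certain
    grid_size       -- number of cells per axis
    lo, hi          -- (2,) lower/upper bounds of the label space
    """
    M = np.zeros((grid_size, grid_size))
    cell = (hi - lo) / grid_size                        # width of one cell per axis
    for p in confident_preds:
        i, j = np.clip(((p - lo) / cell).astype(int), 0, grid_size - 1)
        M[i, j] += 1                                    # M(j): hit count of the cell p falls into
    return M / max(M.sum(), 1)                          # normalize into a label density
```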
The next step is to calculate the grid center and end the iteration once y matches f_θs(x_t); if it does not, we repeat the process until all users have been considered. Once this completes, we calculate ŷ_t and β_t for each sample and save them in the set P. The final step is to calculate the pseudo-label error and credibility using Equation 22.
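As a rough illustration of how the selector S(y) and the pair (ŷ_t, β_t) could be read off the density map, consider the sketch below. The credibility formula here is only a stand-in, since the article defers the actual definition to Equation 22 of the paper.

```python
import numpy as np

def pseudo_label_from_density(pred: np.ndarray,
                              sigma: float,
                              grid_centers: np.ndarray,
                              density: np.ndarray):
    """Derive a pseudo-label y_hat_t and credibility beta_t for one uncertain sample.

    pred         -- (2,) source-model prediction f_theta_s(x_t)
    sigma        -- predictive std sigma_t of that prediction
    grid_centers -- (K, 2) centers of the grid cells
    density      -- (K,) normalized label density over those cells
    """
    # S(y) = 1 for grid centers within 3*sigma_t of the prediction, else 0
    S = np.linalg.norm(grid_centers - pred, axis=1) < 3 * sigma
    w = density * S
    if w.sum() == 0:
        return pred, 0.0              # no nearby density: keep the raw prediction, zero credibility
    beta = w.sum()                    # stand-in credibility: density mass inside the 3-sigma window
    y_hat = (w / w.sum()) @ grid_centers   # density-weighted mean of nearby grid centers
    return y_hat, beta
```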
The figure shows how the pseudo-label error varies with the confidence ratio η. A small η lowers the confidence threshold, so even accurate predictions are treated as uncertain; too large an η, on the other hand, leaves too few uncertain samples, limiting adaptation. We set η to 0.9 in our experiments.
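One plausible reading of the confidence ratio is a quantile split over a per-sample uncertainty score; the sketch below assumes a predictive standard deviation is available for each sample, which this summary does not specify.

```python
import numpy as np

def split_by_confidence(sigmas: np.ndarray, eta: float = 0.9):
    """Split target samples into certain / uncertain sets via the confidence ratio eta.

    sigmas -- (N,) per-sample predictive stds; smaller means more confident
    eta    -- fraction of samples treated as certain (0.9 in our experiments)
    """
    threshold = np.quantile(sigmas, eta)  # eta-quantile of the uncertainty scores
    certain = sigmas <= threshold         # confident predictions seed the density map
    return certain, ~certain              # the rest are pseudo-labeled during adaptation
```

Under this reading, η = 0.9 treats the most confident 90% of samples as certain and adapts on the remaining 10%, matching the trade-off described above.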
Finally, we validate the pseudo-label credibility using the Pearson correlation coefficient and summarize the results in Figure 11. For each person's trajectory data, we compute the correlation between β_t and pseudo-label accuracy. The results show a positive correlation, indicating that the credibility scores indeed single out high-quality annotations.
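The validation itself is a short computation; here is a sketch with SciPy, assuming per-sample pseudo-label errors are available as the (inverse) accuracy measure.

```python
import numpy as np
from scipy.stats import pearsonr

def credibility_correlation(betas: np.ndarray, errors: np.ndarray):
    """Pearson correlation between credibility beta_t and pseudo-label error.

    betas  -- (N,) credibilities for one person's trajectory data
    errors -- (N,) pseudo-label errors (lower error = higher accuracy)
    """
    # accuracy moves opposite to error, so a useful credibility should
    # correlate negatively with error, i.e. positively with accuracy
    r, p = pearsonr(betas, errors)
    return r, p
```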
In conclusion, pseudo-labeling is a powerful tool for generating high-quality annotations on unlabeled data. By estimating a label probability distribution and weighting each pseudo-label by a well-calibrated credibility, annotation quality can be improved without any target-side supervision. The results demonstrate the effectiveness of this approach across a range of applications.