I like probing embedding spaces
I guess it’s my bias, but I believe the whole mechanism behind ML is that the model tries to represent a data set as a mixture of distributions in some high-dimensional embedding space
and to use it effectively we need to navigate that space to find the subspace/manifold we want
so the work of ML research is threefold: how to build a space with those nice properties, how to analyze it, and how to navigate through it
and any learning paradigm can be generalised this way
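to make that concrete, here's a toy sketch of the three folds in Python, with everything invented for illustration: the 2-D "embeddings" are synthetic and the two-component mixture is just a stand-in for whatever the model actually learns
```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# "build": pretend a model embedded two classes of data into a 2-D space
# (synthetic points, purely for illustration)
embeddings = np.vstack([
    rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(200, 2)),  # class A
    rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(200, 2)),  # class B
])

# "analyze": model the space as a mixture of distributions
gmm = GaussianMixture(n_components=2, random_state=0).fit(embeddings)

# "navigate": walk a straight line between the two component means and
# watch which part of the space each intermediate point falls into
start, end = gmm.means_
for t in np.linspace(0.0, 1.0, 5):
    point = (1 - t) * start + t * end
    component = gmm.predict(point.reshape(1, -1))[0]
    print(f"t={t:.2f} point={point.round(2)} component={component}")
```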
for example, classical symbolic AI builds the space deterministically, meaning the space is fully defined by symbolic relations
but in neuro-symbolic learning the space is a statistical representation of symbolic relations; we sample some relations from it according to some criteria, and use them to describe the space of another problem
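and a tiny sketch of that contrast, again purely illustrative (the relations and scores below are made up, this is not any real neuro-symbolic system): the relations carry learned confidence scores instead of being hard facts, and we sample from them to seed another problem
```python
import random

random.seed(0)

# statistical representation of symbolic relations: relation -> confidence
# (invented triples and scores for the example)
scored_relations = {
    ("bird", "can", "fly"): 0.90,
    ("penguin", "is_a", "bird"): 0.95,
    ("penguin", "can", "fly"): 0.05,
    ("fish", "can", "swim"): 0.90,
}

# sample relations according to some criteria (here: weighted by confidence)
relations, weights = zip(*scored_relations.items())
sampled = random.choices(relations, weights=weights, k=3)

# use them to describe the space of another problem, e.g. as the seed
# knowledge base for a downstream task
knowledge_base = set(sampled)
print(knowledge_base)
```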
no science in any of that, just my high-level theory; it could be trivial and it could be completely wrong