thought

thoughts while listening to Open Problems in the Theory of Deep Learning - MIT CBMM

I recently listened to this talk where one of the panelists made a point about why ML can be bad for science

the argument is that ML gives us solutions to many seemingly impossible problems without any need to understand those solutions

and scientists across various fields are using ML in that way

where they just collect a lot of data and let the magic of ML do its work

yes, they need some understanding to design the model, the data, and the training algorithm

but that’s mostly an ML job

and it gives them, for the most part, a black box solution for their problem

and they don’t really understand the solution, they just need to know that it works

so they don’t really learn anything new about the actual subject in question

so in some sense it hinders the growth of science, and we will understand less and less

also, getting a solution in that way does not feel satisfying

so yeah, the symbolic representation work that you were (and still are, I suppose) doing actually helps advance science

not related to my research though, just a philosophical idea that’s good food for thought