We can recover the notions of length and angle when we equip a manifold with a Riemannian metric. The Riemannian metric is simply a section of the bundle of symmetric two-tensors: it associates to every point p a two-tensor, i.e., a bilinear map on the tangent space at p.
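In symbols (a standard textbook formulation, not tied to any particular reference), the metric at p is a symmetric, positive-definite bilinear form, and lengths and angles are recovered from it:

    g_p : T_pM \times T_pM \to \mathbb{R}, \qquad
    \|v\| = \sqrt{g_p(v, v)}, \qquad
    \cos \theta = \frac{g_p(v, w)}{\|v\| \, \|w\|}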

I didn't pass LinkedIn's expertise test on Machine Learning (⊙o⊙) ¯\(°_o)/¯ #MachineLearning #Emojis

With a score of only 30%, I failed LinkedIn's expertise test on Machine Learning. I realized some revision was needed; it should raise red flags that I scored so low in a field in which I am supposed to be an expert.

Quoting the paper «Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges» by Michael M. Bronstein (Imperial College London / USI IDSIA / Twitter), Joan Bruna (New York University), Taco Cohen (Qualcomm AI Research), and Petar Veličković (DeepMind):

[Figure: example of parallel transport]
While for many machine learning readers manifolds might appear as somewhat exotic objects, they are in fact very common in various scientific domains. In physics, manifolds play a central role as the model of our Universe—according to Einstein’s General Relativity Theory, gravity arises from the curvature of the space-time, modeled as a pseudo-Riemannian manifold.
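To make the figure's idea concrete, here is a minimal numerical sketch (my own illustration, not from the paper): it parallel-transports a tangent vector around a circle of constant latitude on the unit sphere by repeatedly projecting onto the successive tangent planes, and compares the resulting holonomy angle with the enclosed solid angle 2*pi*(1 - cos(theta)):

    import numpy as np

    def transport_around_latitude(colat, steps=20000):
        # Parallel transport approximated by projecting the vector onto
        # each new tangent plane and restoring its length; this becomes
        # exact in the limit of infinitesimal steps.
        def point(phi):
            return np.array([np.sin(colat) * np.cos(phi),
                             np.sin(colat) * np.sin(phi),
                             np.cos(colat)])
        p = point(0.0)
        v = np.array([0.0, 0.0, 1.0])
        v -= np.dot(v, p) * p            # make the vector tangent at the start
        v /= np.linalg.norm(v)
        v0 = v.copy()
        for phi in np.linspace(0.0, 2 * np.pi, steps)[1:]:
            p = point(phi)
            v -= np.dot(v, p) * p        # project onto the new tangent plane
            v /= np.linalg.norm(v)       # parallel transport preserves length
        return np.arccos(np.clip(np.dot(v, v0), -1.0, 1.0))

    colat = np.pi / 4                    # 45 degrees from the north pole
    print(transport_around_latitude(colat))   # measured holonomy angle
    print(2 * np.pi * (1 - np.cos(colat)))    # theory: the enclosed solid angle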

For me, manifolds, or curved spaces, are my day-to-day in Machine Learning research: pursuing better techniques instead of just throwing more computing power at existing ones. At some point we can no longer call this science, since the basic principle of science is reproducibility of experimental results, and that is off the table when you need supercomputers to reproduce the experimental results of Google's AI department; not many people have the computing power that IBM, Google, etc. have. Although this is an entirely different point.

For whoever is interested in diving deeper into this topic, Jordan Harrod is a content creator on YouTube (a "youtuber"... I am 40, and to me a podcast is just a radio show transmitted over the internet... lol) and a graduate student at Harvard and MIT researching brain-machine interfaces and machine learning for medicine. In the link below she discusses the topic of scientific reproducibility in the machine learning ecosystem. https://youtu.be/ZTUzu1Op7Jg (link to Jordan's video on the topic)

Getting back to my ridiculous score on LinkedIn's Machine Learning expertise test... Professor Yaser S. Abu-Mostafa, of Caltech, said that in his long career in Machine Learning he has seen brilliant mathematicians start from solid assumptions that hold for the development and produce solid mathematical derivations that get you the result. The problem, in practice, is that the assumptions made in the function-approximation step of machine learning are not realistic. Detaching from the data, from the real applications, is a serious issue for mathematicians. Quoting again Bronstein et al.:

There is a veritable zoo of neural network architectures for various kinds of data, but few unifying principles. As in times past, this makes it difficult to understand the relations between various methods, inevitably resulting in the reinvention and re-branding of the same concepts in different application domains. For a novice trying to learn the field, absorbing the sheer volume of redundant ideas is a true nightmare.

The issue with Machine Learning is that it has contributions from different fields and there is no homogenizing guideline for its content. To me, however, it is clear that Machine Learning is a field born in Mathematics; the Perceptron Learning Model, or the more advanced neural networks that everyone loves nowadays, are mathematical objects. At the time of their conception there wasn't enough computational power to make them a scalable reality; but here we are: thanks to the advances in computation, we can process amounts of data that are incomprehensible to a human brain.
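As a reminder of how mathematical that origin is, here is the Perceptron Learning Algorithm in a few lines, a minimal sketch on toy data I made up for illustration; Rosenblatt formulated the rule decades before hardware existed to scale it:

    import numpy as np

    # Perceptron Learning Algorithm: on a mistake, nudge the weights
    # toward the misclassified point. It converges whenever the data
    # are linearly separable.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 2))
    y = np.sign(X @ np.array([2.0, -1.0]) + 0.5)   # a separable toy labeling

    Xb = np.hstack([np.ones((len(X), 1)), X])      # absorb the bias term
    w = np.zeros(3)
    for _ in range(1000):
        mistakes = np.sign(Xb @ w) != y
        if not mistakes.any():
            break
        i = np.flatnonzero(mistakes)[0]
        w += y[i] * Xb[i]                          # the perceptron update
    print(w, (np.sign(Xb @ w) != y).sum())         # learned weights, 0 mistakes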

But... let's face it: the computer sciences "only" ¯\(°_o)/¯ are in charge of making the implementation more efficient, with algorithms that make it happen (algorithms are mathematical objects as well) and code that runs. Thus, they take care of the data and of how to apply existing models, tweaking them in different ways but not being innovative at all. Of course, in this list I can't include OpenAI or Google's DeepMind; yet they have all the computing power in the world, and implementing graphs in neural networks (for protein folding in AlphaFold) isn't, again, a conceptual breakthrough but a design choice among the several components of the learning model. When the machine learning system needs huge amounts of computation, hence energy... where is the breakthrough? A human brain can learn while optimizing results on a scale that makes the comparison not even applicable. But this world is all about applicability now, and maybe tomorrow, and not really about investing in research that may actually yield no result at all. For example, using geodesics as the error minimizer in curved spaces, or translating that into discrete mathematics using graphs... The more theoretical the work, the more detached from the data we have on the table, the less likely it is to attract a relevant amount of research.
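For what it's worth, here is what "using geodesics as the error minimizer in a curved space" can look like in its simplest incarnation; a sketch under my own toy assumptions, the curved space being the unit sphere. Each step projects the Euclidean gradient onto the tangent space and then moves along the geodesic via the exponential map; minimizing x^T A x over the sphere recovers the smallest eigenvector of A:

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    A = B @ B.T                          # a symmetric toy "loss landscape"

    x = rng.standard_normal(5)
    x /= np.linalg.norm(x)               # start on the manifold
    lr = 0.01
    for _ in range(5000):
        egrad = 2 * A @ x                          # Euclidean gradient of x^T A x
        rgrad = egrad - (egrad @ x) * x            # project onto tangent space at x
        v = -lr * rgrad                            # descent direction (tangent vector)
        n = np.linalg.norm(v)
        if n < 1e-10:
            break
        x = np.cos(n) * x + np.sin(n) * (v / n)   # exponential map: geodesic step

    print(x @ A @ x, np.linalg.eigvalsh(A)[0])    # should nearly agree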

Non ducor, duco (*)

Yet I am a theorist, and very proud to admit it, looking for learning models that are not only powerful but energy- and computation-efficient, on a scale that simple tweaks to algorithmic efficiency can't even dream of reaching. Is it realistic? I don't know, but it is the way I decided to go, and "applicability" won't change the goals of my research pipeline; in the end, you must like / love what you do, so you can no longer call it work.

I do acknowledge that I work on theories that can feel extremely detached from the real data we encounter while developing learning models; but that's the fun part, in my opinion. Obviously, I have to take the criticism as well and try to engage with more practical approaches.

Before ending this short consideration, I would like to present a parable. This "beautiful" math problem was posted on Twitter:

[Image: the viral puzzle 9 = 90, 8 = 72, 7 = 56, 6 = 42, 3 = ?]

Immediately, two confronting groups of users appeared: those who said the result was 12, and those who said it was 18. As a matter of fact, 9 times (9 + 1) is 90, 8 times (8 + 1) is 72... until we reach the infamous 3 times (3 + 1) = 12. However, we can also read it as a sequence: 8 times 9 is 72, 7 times 8 is 56..., down to 3 times 6, which equals 18. Take the left-hand numbers 3, 6, 7, 8, 9 as a sequence x(0) = 3, x(1) = 6, ..., and map each term to x(t) * x(t+1); then 6 * 7 = 42, 7 * 8 = 56, 8 * 9 = 72 match the given rows, and the missing value is x(0) * x(1) = 3 * 6 = 18. One can argue that this second answer feels unnatural, but the way the problem is constructed it forms one of the two possible answers. Remember: the answer is 18 and 12, or the answer is nonexistent, since the problem is not correctly formulated.
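Both readings can be checked mechanically; a small sketch (the rows dictionary reconstructs the image from the description above):

    # The rows shown in the image, with 3 = ? left to solve.
    rows = {9: 90, 8: 72, 7: 56, 6: 42}

    # Reading 1: n -> n * (n + 1)
    assert all(n * (n + 1) == v for n, v in rows.items())
    print(3 * (3 + 1))        # 12

    # Reading 2: the left-hand numbers as a sequence x(0)=3, x(1)=6, ...,
    # each term mapped to x(t) * x(t+1). This matches the 6, 7, 8 rows
    # (the 9 = 90 row only fits reading 1, via 9 * 10).
    xs = [3, 6, 7, 8, 9]
    assert all(xs[t] * xs[t + 1] == rows[xs[t]] for t in range(1, 4))
    print(xs[0] * xs[1])      # 18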

What's the correct answer? Well, again: the correct answer is 12 and 18, or, if we assume the question itself is wrong, that last option is the truly right one, since the question fails to define the problem without ambiguity. I ask myself how many questions from the expertise test were too open or too vaguely defined to count as well formulated; maybe they were intended to create confusion and make one think, yet they created a scenario where the questions were open to interpretation. In a field like machine learning, where there is no official body of content, arguing about the answer to a poorly formulated question is totally fashionable. So we get a number of "Twitter users" happy with their performance on the test whenever they score highly on it. This can be taken with a satirical flavor, since it aims to show a structural problem in our society, not one limited to this example.

The YouTuber Hank Green deliberates on this last event, arguing that no mathematician will ever engage in the discussion of a problem that is so clearly poorly formulated; thus, you end up with individuals arguing passionately but without the knowledge needed to see that the problem lies in the question. The situation goes viral precisely because it is meant to be discussed by people without knowledge of the field, who will nevertheless defend their view with unreasonable passion. The point is to create confrontation and discussion, to make people engage, and to show them in the meantime advertisements tailored to their metadata. This is as true as the fact that news corporations won't present positive news, because positive news doesn't trigger the attention and engagement of the user. There is clearly a problem here, because this situation generates masses of stupidity arguing over discussions no less stupid in content.

Docendo discimus (**)

Thus, congratulations to those who pass poorly formulated questions; excellence is something we should expect from everyone, including LinkedIn.

(*) "I am not led; I lead." (**) "By teaching, we learn."
