9 November 2018

How does Spotify know your tastes so well?

This Monday, just like every Monday before it, over 100 million Spotify users found a fresh new playlist waiting for them called “Discover Weekly”: an automatically generated playlist made just for you. How does Spotify build a personalized playlist for every single user? How does the company know you so well?

As you may have guessed, artificial intelligence is heavily involved, but it is not the only tool. In fact, there are three main methods Spotify uses to “profile” each listener.

I) Collaborative filtering, or profiling by comparison.

What is collaborative filtering? Here is an image that illustrates the idea:




What’s going on here? Each of these individuals has track preferences: the one on the left likes tracks P, Q, R, and S, while the one on the right likes tracks Q, R, S, and T.
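These two preference lists can be written as plain Python sets, a minimal sketch using the tracks from the example (the user numbers and track letters come from the image; everything else is illustrative):

```python
# Track preferences from the example above (hypothetical data).
user_1 = {"P", "Q", "R", "S"}
user_2 = {"Q", "R", "S", "T"}

# Tracks both users like: the overlap that makes their profiles similar.
shared = user_1 & user_2          # {"Q", "R", "S"}

# Recommend to each user the tracks the *other* user likes but they do not.
suggest_to_1 = user_2 - user_1    # {"T"}
suggest_to_2 = user_1 - user_2    # {"P"}
```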

Spotify's algorithm, by confronting all this data, actually creates a “profile” of these two users. Here, we can see that the two users have roughly the same musical profile because they share three songs. Thus, Spotify suggests song T to user 1 and song P to user 2. But you may ask: how does Spotify actually use that concept in practice to compute millions of users' suggested tracks from millions of other users' preferences?

With NumPy arrays, which represent two-dimensional matrices full of numbers: by comparing those matrices, Spotify can measure how close users' profiles are and, consequently, make personalized suggestions.
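Here is a minimal sketch of that idea, assuming a tiny user-by-track matrix of play counts. The data is made up and Spotify's real pipeline is vastly larger, but the core similarity computation can look like this:

```python
import numpy as np

# Rows = users, columns = tracks P, Q, R, S, T (hypothetical play counts).
plays = np.array([
    [3, 5, 2, 4, 0],   # user 1 has never played track T
    [0, 4, 3, 5, 2],   # user 2 has never played track P
    [9, 0, 0, 1, 8],   # user 3 has a very different taste
])

def cosine_similarity(a, b):
    """Cosine of the angle between two preference vectors (1.0 = identical taste)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim_12 = cosine_similarity(plays[0], plays[1])
sim_13 = cosine_similarity(plays[0], plays[2])

# Users 1 and 2 share most of their listening, so they score far higher.
print(sim_12 > sim_13)  # True
```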

II) Natural language processing (NLP)

The second type of recommendation is based on NLP models. NLP, broadly, is the ability of a computer to understand human language as it is spoken or written. Concretely, Spotify constantly crawls the web (social media, blogs, newspapers, etc.) looking for written text about music, to figure out what people think about a song or an artist. It scrapes the key terms found on the internet, builds a sort of dictionary of the most-used terms, and computes the probability that someone will describe a given piece of music or artist with each term. But how can a machine understand text?
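A toy sketch of that dictionary-building step, assuming a handful of scraped snippets about one artist (the snippets are invented; real crawls span millions of pages):

```python
from collections import Counter

# Hypothetical text snippets scraped from blogs about one artist.
snippets = [
    "a dreamy ambient masterpiece",
    "dreamy vocals over ambient synths",
    "dark and dreamy electronic textures",
]

# Count how often each descriptive term appears across all snippets.
counts = Counter(word for text in snippets for word in text.split())

total = sum(counts.values())
# Probability that a scraped word describing this artist is "dreamy".
p_dreamy = counts["dreamy"] / total
```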
The main word to remember for NLP is Word2Vec. It is a model developed by Google consisting of a two-layer neural network trained to reconstruct the linguistic context of words, which reveals how close words are to one another. If you want more information about it, you can check out this article here.
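Training Word2Vec requires a large corpus, so the sketch below only illustrates its *output*: words represented as vectors whose closeness is measured with cosine similarity. The vectors here are hand-made stand-ins, not trained embeddings (which typically have hundreds of dimensions):

```python
import numpy as np

# Hand-made 3-d vectors standing in for trained Word2Vec embeddings.
vectors = {
    "guitar": np.array([0.9, 0.1, 0.2]),
    "bass":   np.array([0.8, 0.2, 0.1]),
    "violin": np.array([0.7, 0.3, 0.4]),
    "rapper": np.array([0.1, 0.9, 0.8]),
}

def closeness(w1, w2):
    """Cosine similarity between two word vectors: higher = closer in meaning."""
    a, b = vectors[w1], vectors[w2]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Instrument words end up close together, far from unrelated words.
print(closeness("guitar", "bass") > closeness("guitar", "rapper"))  # True
```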

III) Audio models

The third recommendation method Spotify uses is the raw audio model, which analyzes the behavior of a song's audio signal and compares it with well-known songs; this helps unknown songs become popular. How does it work? It uses a convolutional neural network, or ConvNet, an architecture commonly used in image recognition, to find similarities with other songs in tempo, signal behavior, sonorities, and so on.
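A minimal NumPy sketch of the basic operation inside such a network: sliding one small filter along a 1-D signal (a convolution) and comparing the resulting feature maps. The signals and filter below are invented for illustration; a real ConvNet learns many filters, usually over spectrograms:

```python
import numpy as np

# Two hypothetical audio signals: a new song and a well-known reference track.
t = np.linspace(0, 1, 200)
song      = np.sin(2 * np.pi * 5 * t)          # 5 Hz tone
reference = np.sin(2 * np.pi * 5 * t + 0.3)    # same "tempo", slight phase shift
noise     = np.random.default_rng(0).normal(size=200)

# One convolutional filter: a short difference filter slid along the signal,
# the elementary operation a ConvNet layer performs on audio (or image rows).
kernel = np.array([1.0, 0.0, -1.0])

def features(signal):
    """Convolve and normalize: a stand-in for one ConvNet feature map."""
    fm = np.convolve(signal, kernel, mode="valid")
    return fm / np.linalg.norm(fm)

# Compare feature maps with a dot product: similar signals score higher.
sim_known = features(song) @ features(reference)
sim_noise = features(song) @ features(noise)
print(sim_known > sim_noise)  # True
```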

By Z.Hamza and L.Nicolas

27 October 2018

The Pan-Industrial Revolution: How New Manufacturing Titans Will Transform the World



I found Richard D'Aveni's new book (Richard is the author of Hypercompetition, 1994) particularly interesting. Richard presents an economic and strategic analysis of the new technological ecosystem built around additive technologies, better known to the public as 3D printing. In his analysis, he compares traditional manufacturing technologies with these emerging ecosystems constructed around 3D printers. The title of his book contains the word "Revolution" and, indeed, he shows that we are on the verge of a new industrial revolution. A combination of economies of scope, economies of scale, and economies of integration, associated with industrial platforms, makes this type of ecosystem particularly competitive. I was particularly surprised by the already numerous adoptions of these technologies by large industrial companies. A must read!


19 September 2018

'Robotic Skins' turn everyday objects into robots



When you think of robotics, you likely think of something rigid, heavy, and built for a specific purpose. New "Robotic Skins" technology developed by Yale researchers flips that notion on its head, allowing users to animate the inanimate and turn everyday objects into robots.
Developed in the lab of Rebecca Kramer-Bottiglio, assistant professor of mechanical engineering & materials science, robotic skins enable users to design their own robotic systems. Although the skins are designed with no specific task in mind, Kramer-Bottiglio said, they could be used for everything from search-and-rescue robots to wearable technologies. The results of the team's work are published today in Science Robotics.
The skins are made from elastic sheets embedded with sensors and actuators developed in Kramer-Bottiglio's lab. Placed on a deformable object -- a stuffed animal or a foam tube, for instance -- the skins animate these objects from their surfaces. The makeshift robots can perform different tasks depending on the properties of the soft objects and how the skins are applied.
"We can take the skins and wrap them around one object to perform a task -- locomotion, for example -- and then take them off and put them on a different object to perform a different task, such as grasping and moving an object," she said. "We can then take those same skins off that object and put them on a shirt to make an active wearable device."
Robots are typically built with a single purpose in mind. The robotic skins, however, allow users to create multi-functional robots on the fly. That means they can be used in settings that hadn't even been considered when they were designed, said Kramer-Bottiglio.
Additionally, using more than one skin at a time allows for more complex movements. For instance, Kramer-Bottiglio said, you can layer the skins to get different types of motion. "Now we can get combined modes of actuation -- for example, simultaneous compression and bending."
To demonstrate the robotic skins in action, the researchers created a handful of prototypes. These include foam cylinders that move like an inchworm, a shirt-like wearable device designed to correct poor posture, and a device with a gripper that can grasp and move objects.
Kramer-Bottiglio said she came up with the idea for the devices a few years ago when NASA put out a call for soft robotic systems. The technology was designed in partnership with NASA, and its multifunctional and reusable nature would allow astronauts to accomplish an array of tasks with the same reconfigurable material. The same skins used to make a robotic arm out of a piece of foam could be removed and applied to create a soft Mars rover that can roll over rough terrain. With the robotic skins on board, the Yale scientist said, anything from balloons to balls of crumpled paper could potentially be made into a robot with a purpose.
"One of the main things I considered was the importance of multifunctionality, especially for deep space exploration where the environment is unpredictable," she said. "The question is: How do you prepare for the unknown unknowns?"
For the same line of research, Kramer-Bottiglio was recently awarded a $2 million grant from the National Science Foundation, as part of its Emerging Frontiers in Research and Innovation program.
Next, she said, the lab will work on streamlining the devices and explore the possibility of 3D printing the components.