This page lists all of my favorite artists, with a maximum of 8 artists per page. The user can filter the artists by genre and scroll through the different pages.
The user can also click on the profile picture of an artist which will redirect them to the artist’s page.
Let’s take Kaytranada as an example.
The page initially contains two sections.
When clicking on the “Get Recommendations” button, the app calls an LLM (currently Gemma 3 via Ollama) with the following prompt:
Using the following artist information (Bio and Tags) and similar artists, describe the musical style of {artist}, explain how they incorporate each genre, and recommend 5 similar artists with a short explanation for each.
The artist’s bio, tags, and similar artists are fetched from the Last.fm API in real time, so the data is always up to date.
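As a rough sketch of that fetch (the `artist.getinfo` method and response shape follow the public Last.fm API, but the function names and the exact fields I keep are my own illustration, not the app’s actual code):

```python
import json
import urllib.parse
import urllib.request

LASTFM_URL = "https://ws.audioscrobbler.com/2.0/"


def fetch_artist_info(artist: str, api_key: str) -> dict:
    """Call Last.fm's artist.getinfo, which returns the bio, tags,
    and similar artists in a single response."""
    params = urllib.parse.urlencode({
        "method": "artist.getinfo",
        "artist": artist,
        "api_key": api_key,
        "format": "json",
    })
    with urllib.request.urlopen(f"{LASTFM_URL}?{params}") as resp:
        return json.load(resp)["artist"]


def extract_prompt_context(artist_payload: dict) -> dict:
    """Keep only the fields the recommendation prompt needs."""
    return {
        "bio": artist_payload["bio"]["summary"],
        "tags": [t["name"] for t in artist_payload["tags"]["tag"]],
        "similar": [a["name"] for a in artist_payload["similar"]["artist"]],
    }
```

The bio, tags, and similar-artist names extracted here are then interpolated into the prompt above.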
Musical style: Kaytranada’s musical style is a complex and highly influential blend of electronic music, primarily house and hip-hop, infused with experimental textures and a distinctive rhythmic sensibility. He expertly layers soulful vocals, chopped-up samples, and intricate beats to create a sound that is both danceable and deeply introspective. His production is characterized by its meticulous detail, vibrant sound design, and a strong emphasis on groove.
Genres:
Similar Artists:
For now, I don’t allow adding an artist directly; artists are only added when an album is added.
I use a combination of Pydantic models, LangChain, and Ollama (running locally) to handle prompts to the LLM.
Let’s break down in a bit more detail what I ask and how I handle the interaction with the LLM.
I have an Ollama_Client class that contains a generate_artists_recommendations function:
You can see on line 2 that I declare a parser, which is tied directly to my Pydantic model.
The role of the model is to structure the response from the LLM; you can think of it as a set of instructions for the output format.
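The exact fields aren’t reproduced here, but based on the three sections shown above (musical style, genres, similar artists), the model could look roughly like this (all field names are my assumption):

```python
from pydantic import BaseModel, Field


class GenreUsage(BaseModel):
    """How the artist incorporates one genre."""
    genre: str = Field(description="Name of the genre")
    explanation: str = Field(description="How the artist incorporates this genre")


class SimilarArtist(BaseModel):
    """One recommended artist with a short justification."""
    name: str = Field(description="Name of the recommended artist")
    reason: str = Field(description="Short explanation of the similarity")


class ArtistRecommendations(BaseModel):
    """The JSON shape the LLM's response must follow."""
    musical_style: str = Field(description="Description of the overall style")
    genres: list[GenreUsage]
    similar_artists: list[SimilarArtist] = Field(
        description="The 5 recommended similar artists"
    )
```

The `Field` descriptions end up in the JSON schema that the parser injects into the prompt, which is why they read like instructions to the LLM.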
Using this Pydantic model, I can make sure the LLM’s response is in the right JSON format, which I then display on the frontend of the app.