Ever Wonder How the Shazam Algorithm Works?

Your phone's ability to identify any song it hears feels like technological magic. In this article, I'll show you how one of the most popular apps, Shazam, does it. Curiously, Shazam's founders published a paper documenting how it works in 2003, and I have personally been working on an open-source implementation of that paper, a project I called abracadabra.

Where the paper doesn't explain something, I'll fill in the gaps with how abracadabra approaches it. I've also included links to the corresponding parts of the abracadabra codebase in the relevant sections, so you can follow along in Python if you prefer.

Granted, the state of the art has moved on since this paper was published, and Shazam has likely evolved its algorithm since it was acquired by Apple in 2018. However, the core principles of audio identification systems have not changed, and the accuracy you can achieve using the original Shazam approach is impressive.

To get the most out of this article, you should understand:

  • Frequency and pitch

    Frequency is "how often" something happens, or the number of cycles a sound wave completes in a second, measured in hertz (Hz). Pitch is the human perception of the frequency of sound, with higher frequencies heard as higher pitches and lower frequencies as lower pitches.

  • Waves

    Waveforms are the shapes or patterns that sound makes when you can see it. They show how the air moves back and forth when something makes a noise.

  • Graphs and axes

    Graphs are pictures that display data using lines, dots, or bars. Axes are the two lines on a graph that help you see where the data belongs, with one line usually going side to side (horizontal) and the other going up and down (vertical).


What is Shazam?

Shazam is an app that can identify songs just by listening to a short sample. When you hear a song and wonder, "What's that track?", you can use Shazam to quickly find its name and artist. The app has proven popular enough – with over 200 million worldwide users each month – that it caught Apple's attention and was acquired in 2018.

You can open Shazam while music is playing, and the app will record a few seconds of audio, which it uses to search its database. Once it identifies the track that is playing, it will display the result on screen.

Shazam recognising a track

Before Shazam was an app, it was a phone number. To identify a song, you would ring up the number and hold your phone's microphone to the music. After 30 seconds, Shazam would hang up and then text you details of the track you had been listening to. If you were using a mobile phone back in 2002, you can appreciate that the quality of phone calls back then made this a tough task!

Why is music recognition hard?

If you haven't done much signal processing before, it may not be obvious why this is a hard problem to solve. To help give you an idea, take a look at the following audio:

The above graph shows what Chris Cornell's "Like a Stone" looks like when stored in a computer. Now take a look at the following section of the song:

If you wanted to tell whether this section of audio came from the song above, you could use a brute-force approach. For example, you could slide the section of audio along the song and see if it matches at any point:

Matching a section of audio by sliding it along the song

This would be a bit slow, but it would work. Now imagine that you didn't know which song this audio came from, and you had a database of 10 million songs to search. This would take a lot longer! A minimal sketch of this sliding approach is shown below.
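
Just to make the brute-force idea concrete, here is a rough sketch of the sliding comparison in Python. It assumes the song and the sample are NumPy arrays of audio samples recorded at the same rate; this is the slow approach that the rest of the article improves on.

```python
import numpy as np

def naive_sliding_match(song: np.ndarray, sample: np.ndarray) -> tuple[int, float]:
    """Slide `sample` along `song` and return (best offset, smallest difference)."""
    best_offset, best_distance = 0, float("inf")
    for offset in range(len(song) - len(sample) + 1):
        window = song[offset:offset + len(sample)]
        distance = np.mean((window - sample) ** 2)  # mean squared difference
        if distance < best_distance:
            best_offset, best_distance = offset, distance
    return best_offset, best_distance
```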

What's worse, when you move from this toy example to samples recorded through a microphone, you introduce background noise, frequency effects, amplitude changes and more. All of these can change the shape of the audio significantly. The sliding approach just doesn't work very well for this problem.

Thankfully, Shazam's approach is a lot smarter than that. In the next section, you'll see a high-level overview of how it works.

System overview

If Shazam doesn't take the sliding approach we described above, what does it do?

Take a look at the following high-level diagram:

The first thing you'll notice is that the diagram is split into register and recognise flows. The register flow remembers a song to enable it to be identified in the future. The recognise flow identifies a short section of audio.

Registering a song and recognising some audio have a lot in common. The following sections will go into more detail, but both flows have the following steps:

  • Calculate the spectrogram of the song/audio. This is a graph of frequency against time. We'll talk more about spectrograms later.
  • Find peaks in that spectrogram. These represent the loudest frequencies in the audio and will help us build a fingerprint.
  • Hash these peaks. In short, this means pairing peaks up to make a better fingerprint.

After calculating these hashes, the register flow will store them in the database. The recognise flow will compare them to hashes already in the database to identify which song is playing via the matching step. In the next few sections, you'll learn more about each of these steps.

Calculating a spectrogram

The first step for both flows is to obtain a spectrogram of the audio being registered or recognised. To understand spectrograms, you first have to understand Fourier transforms.

The Fourier transform

A Fourier transform takes some audio and tells you which frequencies are present in that audio. For example, if you took a 20 Hertz sine wave and used the Fourier transform on it, you would see a big spike around 20 Hertz (Hz):

In the above image, you can see a large spike around 20Hz and nothing at other frequencies. Sine waves are often called pure tones because of this property, since they only contain a single frequency.

The result of a Fourier transform is called a frequency spectrum. We say that when you take the Fourier transform of a signal, you move it from the time domain into the frequency domain. These are fancy terms for describing whether time or frequency is along the bottom of a graph. In mathematical terms, the domain is more or less the X-axis of a graph.

The Y-axis of the frequency spectrum represents the strength of each frequency component. If a frequency component is stronger, then it will be more audible in the time-domain signal.

If you were to add a 50Hz sine wave at half the strength to that 20Hz sine wave, the resulting frequency spectrum would show a spike at 20Hz and a smaller spike at 50Hz:

As you can see, adding multiple audio waves together combines the frequencies present in them. In fact, all audio signals can be reconstructed from waves like this. For more, check out this video on the Fourier transform.
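
If you want to see this in code, here is a minimal sketch using NumPy's FFT. The one-second signal, the 1000 Hz sample rate and the half-amplitude 50Hz wave are illustrative choices; the two largest spikes in the resulting spectrum land at 20Hz and 50Hz.

```python
import numpy as np

sample_rate = 1000                                # samples per second
t = np.arange(0, 1, 1 / sample_rate)              # one second of time
signal = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

spectrum = np.abs(np.fft.rfft(signal))            # strength of each frequency
freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

# The two strongest components sit at 20 Hz and 50 Hz.
print(np.sort(freqs[np.argsort(spectrum)[-2:]]))  # [20. 50.]
```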

One great property of the frequency domain is that it often helps us to see things that aren't clear in the time domain. For example, if you take the two-frequency signal from before and add noise to it, in the time domain it looks visually very different. However, in the frequency domain, the two spikes are still very clear:

In the frequency domain graph on the right, you can still clearly see the spikes for the main component frequencies. It would be harder in the time domain to see what frequency sine waves went into the signal.

Up until now, our examples have only contained one or two frequencies, but what happens if you put a more complex signal through the Fourier transform? Let's take a look at our section of audio from Like a Stone:

Real audio files like the one above contain lots of different frequencies. This is a good thing, since it means the frequencies present can uniquely identify songs.

Spectrograms

If you run a Fourier transform over an entire song, then you'll see the strength of the frequencies present over the whole song (see the abracadabra implementation). However, the frequencies that are present change over time. To better represent the frequencies changing over time, we need to split the song into small sections before taking the Fourier transform. This is called taking a spectrogram.

Here's a simplified animation of how spectrograms work:

Explanation of the spectrogram process

In the above animation, you can see that the song is first split into small sections. Next, we use the Fourier transform to calculate the frequency spectrum of each of these sections. When you put all of these frequency spectrums together, you get a spectrogram.
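
As a rough sketch of that process in Python, SciPy's spectrogram function does the splitting and the Fourier transforms for you. The window sizes below are illustrative and not necessarily what abracadabra uses.

```python
import numpy as np
from scipy.signal import spectrogram

sample_rate = 44_100
audio = np.random.randn(sample_rate * 5)   # stand-in for 5 seconds of real audio

frequencies, times, strengths = spectrogram(
    audio,
    fs=sample_rate,
    nperseg=4096,     # samples per window (~93 ms of audio)
    noverlap=2048,    # overlap between neighbouring windows
)

# strengths[i, j] is the strength of frequencies[i] at times[j]
print(strengths.shape)   # (number of frequency bins, number of time windows)
```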

To make this concrete, let's take a look at the spectrogram of Like a Stone:

Even though the spectrogram looks 2-dimensional, it's actually a 3D graph with the following axes:

  • Time (X-axis)
  • Frequency (Y-axis)
  • Strength (Z-axis/colour)

The Z-axis is represented by colour in the spectrogram above. Bright green indicates a high strength for a particular frequency component and dark blue indicates a low strength.

Looking at the spectrogram above, you can see that the brightest spots (strongest frequencies) almost exclusively occur below 5000Hz. This is quite common with music; for example, most pianos have a frequency range of 27Hz-4186Hz.

The frequencies present in a song contain a lot of identifying information, and calculating the spectrogram gives us access to that information. In the next section, you'll find out how we turn all that information into a unique fingerprint for the track.

Fingerprinting

Just as a fingerprint uniquely identifies a person, we can extract a unique fingerprint for some audio from its spectrogram.

These audio fingerprints rely on finding peaks in the spectrogram. These peaks are the loudest frequencies at some point in the song. Because they are loud, they are likely to survive when subjected to noise or other distortions.

In the next section, you'll read a bit more about the motivation behind using spectrogram peaks to build fingerprints.

Why is the fingerprint based on spectrogram peaks?

A spectrogram peak is a frequency that is loud at some point in an audio signal. You can recognise these on a spectrogram since they will be the brightest points.

In music, these would represent the loudest notes. For example, during a guitar solo, the notes that the guitar is playing might become spectrogram peaks, since they would likely be the loudest notes at that time.

A spectrogram peak is the point least likely to be affected by noise. Noise has to be louder than the spectrogram peak to make it unrecognisable, and the spectrogram peaks are the loudest frequency components in the song.

To make this visual, look back at our earlier example of a Fourier-transformed signal that had noise added to it. When noise is added, the frequency peaks still keep their rough shape.

Another advantage of using spectrogram peaks to fingerprint audio is that they cut down the amount of data we have to store. Storing only the loudest frequency components means we don't have to store everything else. This speeds up searching fingerprints, since there is less data to look through.

Before we can use frequency peaks in our fingerprint though, we need to find them. In the next section, you'll learn how.

Finding peaks

As mentioned in the previous section, the peaks of a spectrogram represent the strongest frequencies in a signal. For frequency peaks to be usable in an audio fingerprint, it's important that they're evenly spaced through the spectrogram (see the abracadabra implementation).

It's important the peaks are evenly spaced in time, so the system can recognise any section of the song. For example, if all of the peaks were at the start of the song, then the fingerprint wouldn't cover later sections:

In the image above, all of the peaks (white crosses) are clustered at the start of the song. This means the system can't recognise any sample from the rest of the track.

It's also important that the peaks are evenly spaced in frequency, so the system can deal with noise and frequency distortion. Sometimes noise will be very loud and concentrated in a specific frequency range, for example a car horn in the background:

Peaks clustered in a frequency band affected by noise

In the above animation, the peaks are well-spaced in time, but are clustered into a small frequency band. When a loud noise is added, for example a car horn, it can make a whole section of the song unrecognisable by changing which peaks are selected.

To find spectrogram peaks while keeping them well-spaced, we can borrow a technique from image processing known as a maximum filter. The process looks something like the following:

  • Use the maximum filter to highlight peaks in the spectrogram.
  • Locate the highlighted peaks by comparing to our original spectrogram.
  • (Optional) Discard some peaks.

Let's run through these steps one by one. First, let's look at how the maximum filter works:

Step 1: Maximum filter

A maximum filter emphasises the peaks in an image. It does this by looking in a neighbourhood around each pixel for the maximum value and then setting the pixel to that local maximum. The following animation shows a maximum filter that looks at a 3x3 neighbourhood around each pixel:

Animation of a maximum filter on a simple image

As you can see in the above animation, the maximum filter takes each pixel of an image in turn and finds the maximum in a region surrounding it. The filtered pixel is then set to that local maximum. This has the effect of expanding each local peak to its surrounding area.

Running a maximum filter on Like a Stone's spectrogram gives the following result:

The maximum-filtered spectrogram looks like a lower-resolution version of the original spectrogram. This is because the peaks in the signal have expanded and taken over the other pixels. Each box with the same colour in the filtered image corresponds to a local peak in the original image.

The maximum filter has a parameter that controls the size of the box to use when finding the local maxima. When you set this parameter to make a smaller box, you end up with more peaks. Similarly, by setting this parameter larger, you get fewer peaks.

Step 2: Recover original peaks

The maximum filter doesn't do all the work for us. The filter has emphasised the local peaks, but it hasn't found their locations. To find the peak locations, we need to find the points that have equal values in the original spectrogram and the filtered spectrogram.

The idea behind this trick is that all the non-peak points in the spectrogram were replaced by their local peaks, so their values have changed. The only points whose values haven't changed are the peaks.

Below is a zoomed-in section of the spectrogram above. The points where the values are equal in the filtered and original spectrograms are highlighted:

As you can see in the images above, the highlighted points where the two spectrograms are equal correspond to the local peaks of that part of the image.
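
Here is a minimal sketch of Steps 1 and 2 together, using SciPy's maximum filter. The random array stands in for a real spectrogram, and the 20-bin neighbourhood size is an illustrative value rather than abracadabra's exact parameter.

```python
import numpy as np
from scipy.ndimage import maximum_filter

spectrogram_data = np.random.rand(512, 1024)        # stand-in for a real spectrogram
filtered = maximum_filter(spectrogram_data, size=20)

# Points that kept their value through the filter are the local peaks.
peak_mask = spectrogram_data == filtered
peak_freq_bins, peak_time_bins = np.nonzero(peak_mask)

# Each (time bin, frequency bin) pair is one point on the constellation map.
constellation = sorted(zip(peak_time_bins, peak_freq_bins))
print(len(constellation), "peaks found")
```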

Plotting all of the peaks together produces something called a constellation map. Here's the constellation map for Like a Stone:

These graphs are called constellation maps since they look a bit like an image of the night sky. Who said computer science couldn't be romantic?

Step 3: (Optional) Discard peaks

Once we have a constellation map of peaks, the next step is to potentially discard some of them. The size of our fingerprint depends on the number of peaks that we use in it. Keeping fingerprints small matters when you're storing millions of songs in your database.

However, by reducing the number of peaks we use, we reduce the accuracy of our system. Fewer peaks in a fingerprint mean fewer chances to match a sample to the correct song.

There are a couple of options for reducing the number of peaks in our fingerprint (the first is sketched in code after this list):

  • Take the top N peaks. N should be proportional to the length of audio that you are fingerprinting, to avoid over-representing shorter songs.
  • Take all peaks above a certain threshold. This doesn't guarantee a fixed fingerprint size per unit of time like the other method, but may give more accurate results.
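
Here is a rough sketch of the first option, continuing with the constellation-map idea from the previous sketch. The peaks-per-second constant is an arbitrary illustrative value; abracadabra's own setting may differ.

```python
import numpy as np

def keep_strongest_peaks(constellation, spectrogram_data, audio_seconds, peaks_per_second=15):
    """Keep the N strongest peaks, where N scales with the length of the audio."""
    n_to_keep = int(audio_seconds * peaks_per_second)
    strengths = [spectrogram_data[freq_bin, time_bin]
                 for time_bin, freq_bin in constellation]
    strongest_first = np.argsort(strengths)[::-1]     # indices, strongest peaks first
    return [constellation[i] for i in strongest_first[:n_to_keep]]
```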

We have almost finished building our fingerprint; the next step is to produce a set of hashes from our peaks.

Hashing

To motivate hashing, imagine that our fingerprint was just a series of spectrogram peaks. Each peak's frequency would be represented by a certain number of bits, for example 10. With 10 bits of information, we can represent 2^10 = 1024 distinct frequencies. With thousands of these points per song, we quickly get lots of repeats (see the abracadabra implementation).

Uniqueness is important for a fingerprint, since it makes searching much faster and helps to recognise more songs. Shazam's solution to the uniqueness problem is to create hashes from pairs of peaks:

The diagram above shows a zoomed-in portion of a spectrogram. Each circle represents a peak and the dashed-line box represents a hash. You can see that a hash is formed of two peaks. The information recorded for each hash is the frequency of each peak, fA and fB, and the time delta between them, ΔT.

The advantage of pairing points up is that two paired points are far more unique than a single point. Looking at it mathematically, if each point has 10 bits of frequency information, and the time delta between the two points can be represented by 10 bits, then you have 30 bits of information. 2^30 = 1073741824, which is considerably larger than the 1024 possibilities for a single point.

Shazam creates pairs using the following algorithm:

  • Pick a point. This will be called the anchor point.
  • Calculate a target zone of the spectrogram for the anchor point.
  • For every point in the target zone, create a pair with the anchor point.

You can see this algorithm illustrated in the following animation:

Animation of pairing points

Choosing a target zone isn't described in the Shazam paper, but the images the paper contains show it starting slightly ahead in time of the anchor point and being centred on the anchor point's frequency.

Once a pair has been created, it is stored as a hash in the database with the following information:

        Point A freq (fA) | Point B freq (fB) | Time delta (ΔT) | Point A time | Track ID

The first three columns (fA, fB and ΔT) make up the hash. The remaining "other information" is used to locate the hash at a specific time in a track. This will be used in matching later.
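
To make the pairing algorithm concrete, here is a sketch in Python. The target-zone bounds (how far ahead in time and how far in frequency to look, and how many pairs per anchor) are illustrative guesses, since the paper doesn't specify them.

```python
def create_hashes(peaks, track_id, max_time_delta=10, max_freq_delta=50, pairs_per_anchor=5):
    """`peaks` is a list of (time, frequency) points sorted by time."""
    hashes = []
    for i, (anchor_time, anchor_freq) in enumerate(peaks):
        paired = 0
        for other_time, other_freq in peaks[i + 1:]:
            time_delta = other_time - anchor_time
            if time_delta > max_time_delta or paired >= pairs_per_anchor:
                break      # past the target zone, or enough pairs for this anchor
            if abs(other_freq - anchor_freq) > max_freq_delta:
                continue   # outside the target zone's frequency band
            # (fA, fB, ΔT) is the hash; the anchor time and track ID are the
            # "other information" stored alongside it.
            hashes.append(((anchor_freq, other_freq, time_delta), anchor_time, track_id))
            paired += 1
    return hashes
```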

All of the hashes for a particular song make up the fingerprint. In the next section, you'll learn how Shazam matches these fingerprints.

Matching

Given a set of fingerprints in a database, how does Shazam figure out which one a given audio sample matches? This is where the matching part of the system comes in.

Recall the system diagram from earlier:

The recognise and register flows both generate fingerprints. The difference lies in what they do with them. While the register flow stores fingerprints away for future matching, the recognise flow has to match its fingerprint against what is already in the database.

The matching algorithm involves the following steps:

  • Retrieve all hashes from the database that match the sample's fingerprint.
  • Group these hashes by track.
  • For each track, figure out if the hashes line up.
  • Choose the track with the most lined-up hashes.

We'll look at each of those steps in turn.

Step 1: Retrieve matching hashes

The first step is to find every hash in the database that matches a hash in the fingerprint we just created (abracadabra implementation). Even though a hash is a 3-tuple of (fA, fB, ΔT), abracadabra stores it as hash(fA, fB, ΔT), where hash() is a hash function that returns a single value.

This means you only have to search for a single value per hash instead of three.
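
One way to do this (a sketch, not necessarily the exact function abracadabra uses) is to feed the three values through a standard hash function and store the digest:

```python
import hashlib

def hash_peak_pair(freq_a: int, freq_b: int, time_delta: int) -> str:
    """Collapse (fA, fB, ΔT) into a single value that the database can index."""
    key = f"{freq_a}|{freq_b}|{time_delta}".encode()
    return hashlib.sha1(key).hexdigest()

print(hash_peak_pair(820, 940, 3))   # one lookup key instead of three numbers
```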

Step 2: Group hashes by track

Recall the layout of an individual hash in the database:

        Point A freq (fA) | Point B freq (fB) | Time delta (ΔT) | Point A time | Track ID

Thanks to the track ID that we associated with each hash, we can group the hashes by track. This allows us to score each potentially matching track.

Step 3: Figure out if hashes line up

abracadabra implementation

If a sample matches a track, then the hashes present in that sample should line up well with the hashes in some section of that track. The diagram below illustrates what this would look like:

In the above diagram, a sample has been lined up with the section of the original track that it came from. The blue points represent the anchor points of the hashes.

While the above diagram shows the perfect situation, there is a chance that the matching hashes from the database don't line up perfectly. For example, noise could have introduced peaks in the sample that resemble peaks at a different point in the track. This can lead to the following situation:

In the above diagram, the red circles represent hashes that match to points in the track outside the section the sample came from. In this situation, it's harder to see that the sample is a good match for the track.

What's worse, sometimes hashes can match to the wrong track entirely! This is where checking that the hashes line up comes in.

To explain how we can check whether the hashes line up in code, let's look at an example. Imagine that we've retrieved a list of matching hashes from the database and grouped them by track. For a given track, we can then compare the time that each hash occurs in the original track against the time that it occurs in the sample.

        Sample time | Track time | Track time - Sample time
        3           | 13         | 10
        4           | 14         | 10
        7           | 20         | 13
        5           | 15         | 10
        6           | 12         | 6
        1           | 11         | 10

In the above table, all of the matches with a Track time - Sample time of 10 have been highlighted. These are the true matches, while the other rows are false matches. To see that this is the case, let's look at a diagram similar to the ones we saw before:

The above diagram contains the same hashes as the previous table. As you can see, the true matches have a Track time - Sample time that is equal to how far into the track the sample starts.

To see how we turn this into a score for the track, let's put this data into a histogram. A histogram is a fancy name for a bar chart. We're going to plot each Track time - Sample time against the number of times it occurs:

Each bar in the histogram above is called a bin. To score a track on how good a match it is for an audio sample, we just need to take the largest bin. Tracks that aren't good matches will have low values in all bins, while a track that is a good match will have a large spike in one of its bins.

This way we can compare a sample to all of the songs with matching hashes in our database and score each of them. The song with the highest score is likely to be the correct result.
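
Here is a minimal sketch of this scoring step. The shape of the `matches` dictionary (a track ID mapped to (sample time, track time) pairs) is an assumption for the example, and the first track's numbers mirror the table above.

```python
from collections import Counter

def score_tracks(matches: dict) -> dict:
    """Score each candidate track by the size of its largest offset bin."""
    scores = {}
    for track_id, pairs in matches.items():
        offsets = Counter(track_time - sample_time for sample_time, track_time in pairs)
        scores[track_id] = max(offsets.values())   # the biggest histogram bin
    return scores

matches = {
    "like_a_stone": [(3, 13), (4, 14), (7, 20), (5, 15), (6, 12), (1, 11)],
    "other_song": [(2, 40), (5, 9), (8, 31)],
}
print(score_tracks(matches))   # {'like_a_stone': 4, 'other_song': 1}
```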

You might wonder why we don't just go for the track that matches the largest number of hashes, since that would be much simpler to implement. The problem with this approach is that not all songs are the same length. Longer songs are likely to get more matches than shorter songs, and when some Spotify tracks are over four hours long, this can really bias your results.

Conclusion

Well done for making it this far, that was a long journey! Over the course of this article, you've learned how Shazam extracts fingerprints from audio, and how it matches those fingerprints to the ones it has already registered in its database.

To summarise, Shazam does the following to register a song:

  • Calculates a spectrogram of the song
  • Extracts peaks from that spectrogram
  • Pairs those peaks up into hashes
  • Stores the collection of hashes for a song as a fingerprint

Shazam does the following to recognise an audio sample:

  • Calculates a fingerprint of the audio sample
  • Finds the hashes that match that fingerprint in the database
  • For each potential track match:
    • Calculate Track time - Sample time for each matching hash
    • Group those values into a histogram
    • Take the largest bin in this histogram as the score for the track
  • Return the track with the highest score

Enter abracadabra

I learned everything written here over the course of writing abracadabra, my implementation of this paper. If you're interested in seeing what this might look like in code, please take a look!

Everything is open source and I've done my best to document the project. Abracadabra can also be used as a library in other projects, so please feel free to remix it and build something cool. If you do use it, I'd love to hear about it.

Further reading

If you would like to learn more about anything mentioned in this article, take a look below. I've also scattered some helpful links throughout the page.

  • dejavu is another implementation of a song recogniser in Python. The author wrote a great explanation of how it works.
  • Computer Vision for Music Identification is another approach to music identification that is similar to dejavu.
  • An algorithm that takes a slightly different approach is Chromaprint.
  • Musicbrainz is an open-source encyclopedia of music information. This page explains how they fingerprint audio.
  • Playing with Shazam fingerprints is an article from 2009 about the author's experience implementing the Shazam algorithm.
  • Alignment of videos of the same event using audio fingerprinting is an example of a use case for this algorithm that goes beyond music.