Fourier Analysis of Noise - Tania Karan

PHYS 210 PROJECTS --> here

Hello all

So when I was thinking of what to do for my project, I came upon an idea about music; after all, if you are going to spend a vast amount of time researching a concept, it should be something you're interested in... How lucky that music has a great deal to do with physics!

I did some research, and very concrete things such as "Fourier Approximations and Music" came up, but what grabbed my attention was someone's attempt to explain scientifically what makes music pleasing to the ear. If I were to take up this "experiment", I could take into account loudness, pitch, timbre, etc., and therefore graphs of pressure, harmonics and such would be required. From what I gathered, this researcher analyzed clips of music in MATLAB, so I would have the resources as well.

I guess I just like the idea of scientifically comparing pleasant and harsh sounds (heavy metal?) to see the difference. I realize people will tell me that the kind of music people like is strictly subjective... but there are songs which a vast majority of people like or dislike.

I await your feedback, and welcome any other suggestions of projects to do with music!

This is a great research topic, and one which probably has a considerable accumulation of literature; my only concern is whether you can get far enough into it to find where computational methods could be of help. A simple tool to predict who will find a given piece of music pleasant might be nice, but its implementation is probably nontrivial. See what you can find out. -- Jess 16:17, 21 September 2008 (PDT)

If analyzing entire passages of music becomes overbearing, perhaps looking at chord structures, scale patterns, and keys, and analyzing why some are more pleasant to listen to than others, might be easier to tackle: the way certain frequencies constructively and destructively interfere determines what is pleasant sounding. Musical tastes may be subjective, but harmony and dissonance are not as much. S48571087 23:02, 30 September 2008 (PDT)
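
As a concrete illustration of that interference, here is a minimal Python/NumPy sketch (the frequencies are arbitrary): two tones only a few hertz apart drift in and out of phase and "beat", while an octave pair superposes smoothly.

<syntaxhighlight lang="python">
import numpy as np

fs = 8000                          # sample rate in Hz (arbitrary)
t = np.arange(0, 1.0, 1.0 / fs)

close = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 444 * t)   # beats at 4 Hz
octave = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 880 * t)  # smooth octave

# The envelope of `close` swells and fades four times per second as the two
# tones move in and out of phase (constructive/destructive interference);
# `octave` keeps a steady envelope.
print("close-pair amplitude range:", close.min(), close.max())
print("octave amplitude range:    ", octave.min(), octave.max())
</syntaxhighlight>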

I have played around with Maple and was happy to find it has a wealth of monophonic sound analysis tools, so graphing the sound clips should not be too hard. Now my only problem is the math I am going to use to fit the data so that I can find a frequency.
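
For reference, the most direct way to pull a frequency out of a clip is to take its discrete Fourier transform and pick the dominant peak. A minimal sketch in Python/NumPy/SciPy (the filename is a placeholder; Maple's or MATLAB's FFT routines do the same job):

<syntaxhighlight lang="python">
import numpy as np
from scipy.io import wavfile   # assumes SciPy is available

# Hypothetical mono clip; replace with a real recording.
fs, clip = wavfile.read("clip.wav")
clip = clip.astype(float)

# Magnitude spectrum of the whole clip.
spectrum = np.abs(np.fft.rfft(clip))
freqs = np.fft.rfftfreq(len(clip), d=1.0 / fs)

# The strongest peak is a crude estimate of the dominant frequency.
dominant = freqs[np.argmax(spectrum)]
print("dominant frequency: %.1f Hz" % dominant)
</syntaxhighlight>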

One problem that arises is that the things in our environment we refer to as noise often comprise many different frequencies. When I did some reading, I came upon the "pitch detection algorithm", which can be used to find the fundamental frequency of a periodic signal. The problem is that such algorithms are usually used for speech or musical notes rather than noises such as a dog's bark or the other noises I plan to investigate.
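
One common family of pitch-detection methods (not necessarily the one in the reading above) is based on autocorrelation: a periodic signal matches itself when shifted by one period, while an unpitched noise does not. A minimal sketch, assuming a mono NumPy array `clip` sampled at `fs` Hz; the 0.3 threshold is an arbitrary choice:

<syntaxhighlight lang="python">
import numpy as np

def estimate_pitch(clip, fs, fmin=50.0, fmax=1000.0):
    """Crude autocorrelation pitch estimate; returns None for unpitched noise."""
    clip = clip - clip.mean()
    ac = np.correlate(clip, clip, mode="full")[len(clip) - 1:]  # lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)   # plausible period range in samples
    lag = lo + np.argmax(ac[lo:hi])
    # If the best peak is weak relative to zero lag, treat the sound as unpitched.
    if ac[lag] < 0.3 * ac[0]:
        return None
    return fs / lag
</syntaxhighlight>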

Jason suggested I look at wavelet transforms, which would be more "fitting" for my data analysis. My only issue is that I don't know how difficult it is to fit data in such a manner; I have never even heard of "wavelet transforms", and though I am willing to try, I realize there is no way I can master the concept in time for the proposal...

You don't need to know how you're going to do it for the proposal presentation, just what you aim to do. -- Jess 19:21, 5 October 2008 (PDT)

I have read that the "discrete wavelet transform is also less computationally complex, taking <math>O(N)</math> time as compared to <math>O(N \log N)</math> for the fast Fourier transform", so perhaps my idea of difficulty levels is incorrect.
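In practice both transforms are single calls; the sketch below assumes the PyWavelets package for the discrete wavelet transform and uses an arbitrary random test signal, so the <math>O(N)</math> versus <math>O(N \log N)</math> difference is not really the hard part.

<syntaxhighlight lang="python">
import numpy as np
import pywt                      # PyWavelets, assumed installed

rng = np.random.default_rng(0)
x = rng.standard_normal(2 ** 14)         # an arbitrary test signal

X = np.fft.rfft(x)                       # fast Fourier transform, O(N log N)
coeffs = pywt.wavedec(x, "db4", level=5) # discrete wavelet transform, O(N)

print("FFT bins:", len(X))
print("DWT coefficients per level:", [len(c) for c in coeffs])
</syntaxhighlight>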

As dumb a question as this probably is: it is said that wavelets are localized in both time and frequency, while the Fourier transform is only localized in frequency. Now I know that I was told to use wavelets because they apply to my data analysis better, but I don't understand why time and frequency "localization" is better.
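
To make the question concrete: a Fourier transform of a whole clip reports which frequencies are present but not when they occur. The sketch below uses a windowed-Fourier spectrogram from SciPy to stand in for the idea (a wavelet transform gives the same kind of time-frequency picture, with variable-width windows); the 300/600 Hz test signal is arbitrary.

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

fs = 8000
t = np.arange(0, 2.0, 1.0 / fs)
# First second is a 300 Hz tone, second second is 600 Hz.
x = np.where(t < 1.0, np.sin(2 * np.pi * 300 * t), np.sin(2 * np.pi * 600 * t))

# The FFT of the whole clip shows peaks near both 300 and 600 Hz,
# but gives no hint that they occur at different times.
spectrum = np.abs(np.fft.rfft(x))

# A time-frequency representation keeps both: Sxx[f, t] says which
# frequency is strong during which time slice.
f, tt, Sxx = signal.spectrogram(x, fs)
print("strong frequency in first slice: %.0f Hz" % f[np.argmax(Sxx[:, 0])])
print("strong frequency in last slice:  %.0f Hz" % f[np.argmax(Sxx[:, -1])])
</syntaxhighlight>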

I have never used wavelets myself; my understanding is that they involve Fourier transforms over a finite time interval, presumably using a "packet" envelope like a Gaussian? This might be handy to avoid "ringing" due to sharp cutoffs; but for starters just do the FFT on the whole clip and see what you get. In general "noise is bad" so try to avoid it; if you are looking for signatures of various sounds, maybe you can get away with "just hitting the high points" (i.e. ignoring everything in the FFT less than some small multiple of the mean noise amplitude), but this always degrades the subtlety. -- Jess 19:21, 5 October 2008 (PDT)
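
A minimal sketch of the "just hit the high points" idea, assuming a mono NumPy array `clip` at sample rate `fs`; the threshold factor is arbitrary and would need tuning against real recordings:

<syntaxhighlight lang="python">
import numpy as np

def spectral_signature(clip, fs, factor=5.0):
    """Keep only FFT peaks well above the mean amplitude (a crude noise floor)."""
    spectrum = np.abs(np.fft.rfft(clip - np.mean(clip)))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / fs)
    threshold = factor * spectrum.mean()
    keep = spectrum > threshold
    return freqs[keep], spectrum[keep]   # the "high points" of the spectrum
</syntaxhighlight>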

New problem: noises don't have a fundamental frequency, do they? If I cannot find a set parameter, I cannot simply compare and contrast ALL the frequencies of one noise with those of another. What might I set as a parameter to look at?

I have discussed the matter with Raymond, and he said perhaps my sound analysis could be about musical notes and their relationships. Because certain notes (which have specific frequencies) sound good together, while others played together clash and do not sound good, perhaps I could investigate the relationship of frequency in euphonic and cacophonic sounds.
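
If the project does head this way, the usual starting point is that consonant intervals correspond to simple frequency ratios (2:1 for an octave, 3:2 for a perfect fifth), while dissonant ones do not. A minimal sketch that builds two-note chords from such ratios (the reference pitch and the choice of the tritone as the dissonant example are just for illustration):

<syntaxhighlight lang="python">
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
base = 261.6                      # roughly middle C, just as a reference

def chord(ratio):
    """Sum of two tones whose frequencies are related by `ratio`."""
    return np.sin(2 * np.pi * base * t) + np.sin(2 * np.pi * base * ratio * t)

fifth = chord(3 / 2)              # consonant: simple 3:2 ratio
tritone = chord(np.sqrt(2))       # dissonant: irrational equal-tempered tritone

# The fifth's combined waveform repeats with a short period; the tritone's
# never quite settles, which is one physical handle on dissonance.
</syntaxhighlight>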

I like your original idea better. It may be a little ambitious, but it is always more fun working on your own ideas than reporting on established theory; and I believe the relationship of frequency in euphonic and cacophonic sounds is standard music theory, not so much physics. I'm not too sure what it is that worries you about noise. There is a subtle and interesting difference between "random sounds" and "noise"; you should read up on the latter (white noise, <math>1/f</math> noise, etc.). -- Jess 14:59, 6 October 2008 (PDT)
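
For reference, these noise types are classified by the shape of their power spectrum: white noise has roughly equal power at every frequency, while <math>1/f</math> ("pink") noise has power falling off inversely with frequency. A minimal sketch that generates both and compares their spectra; shaping white noise in the frequency domain is one common way to make <math>1/f</math> noise, not the only one:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16

white = rng.standard_normal(n)            # flat power spectrum on average

# Shape white noise in the frequency domain so its power falls off as 1/f.
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
scale = np.zeros_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])      # amplitude ~ f^(-1/2)  =>  power ~ 1/f
pink = np.fft.irfft(spectrum * scale, n)

for name, x in [("white", white), ("1/f ", pink)]:
    power = np.abs(np.fft.rfft(x)) ** 2
    ratio = power[1:101].mean() / power[-100:].mean()
    print("%s noise: low-frequency / high-frequency power ~ %.1f" % (name, ratio))
</syntaxhighlight>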