timtam
Meter Reader 1st Class
Posts: 53
Likes: 24
|
Post by timtam on Apr 22, 2023 5:22:50 GMT -5
GITEC has just presented the results of their blinded comparison study of six '59-style PAF clones, conducted by a team with a range of engineering/physics, research, and playing expertise (K. Härtl, W. Hönlein, J. Lody, M. Vochezer, T. Zwicker). It's in German, but YouTube's closed captions, with German-to-English auto-translation enabled, do a pretty good job of making it understandable to non-German speakers.

The 6 PAF clones tested were 4 expensive sets, one moderately priced, and one inexpensive:
Throbak SLE 101 MXV
Haussel 59
Kloppmann HB 59 Set
Amber 'Spirit of 59'
Seymour Duncan '59 Model (SH1)
Roswell LVS

The study involved three different elements, conducted throughout 2022. The first was an objective measurement stage, which assessed the resonant frequencies from bode plots. Those showed that most of the pickups were very similar, with one exception (the Roswell) ... (www.youtube.com/watch?v=5KHBrqxGPuA&t=514s)
(neck pickup is red, bridge pickup is blue)
Even if the vertical scale is blown up, you still see little difference between 5 of the sets. The bridge-versus-neck resonant peaks are typically around 300Hz apart (neck higher). (www.youtube.com/watch?v=5KHBrqxGPuA&t=576s)
There was also a blinded listening test, where an experienced professional player (blinded) played a series of phrases, clean and distorted (the latter have not been analysed so far). This was done over a short period of time using GITEC's 'pickup change' guitar - a tele body with special pickup 'shuttles' that allowed the pickups to be changed quickly. Careful attention was paid to picking in the same position, with similar technique. (www.youtube.com/watch?v=5KHBrqxGPuA&t=706s)
Those audio recordings (clean only) were then assembled into a 6x6 matrix for blinded listening on GITEC's web site (links below). Volunteer anonymous participants were asked to listen to each (blinded) pickup as often as they wished, comparing the same pickup across the different played phrases, as well as different pickups across the same played phrase. They then ranked the 6 pickup sets from most- to least-preferred (6 ratings per participant). The results from 48 participants were analysed, with each top ranking giving a pickup a score of 6, and each bottom ranking a score of 1. Thus the highest possible total score for each pickup is 48x6=288 (if it was rated best by all 48 participants), and the lowest possible score is 48x1=48 (rated lowest by all). A higher score thus means 'more highly rated'.

paf.gitec-forum.eu/seite1.html
paf.gitec-forum.eu/seite2.html
paf.gitec-forum.eu/auswertung1.html

Results:
1 Seymour Duncan '59 Model (SH1) - 186 points
1 Amber 'Spirit of 59' - 186 points
3 Haussel 59 - 167 points
4 Throbak SLE 101 MXV - 151 points
5 Kloppmann HB 59 Set - 131 points
6 Roswell LVS - 80 points
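For concreteness, the scoring scheme described above can be sketched in a few lines of Python. This is a hypothetical simulation with random rankings, not the actual GITEC responses:

```python
import random

# The six tested sets, as listed above
PICKUPS = ["Throbak SLE 101 MXV", "Haussel 59", "Kloppmann HB 59",
           "Amber 'Spirit of 59'", "Seymour Duncan '59", "Roswell LVS"]

def score_rankings(rankings):
    """rankings: one ordering per participant, most- to least-preferred."""
    totals = {p: 0 for p in PICKUPS}
    for order in rankings:
        for rank, pickup in enumerate(order):  # rank 0 = most preferred
            totals[pickup] += 6 - rank         # 6 points down to 1
    return totals

# 48 simulated participants, each giving a random ranking
rankings = [random.sample(PICKUPS, k=6) for _ in range(48)]
totals = score_rankings(rankings)

# Every participant hands out 6+5+4+3+2+1 = 21 points,
# so the totals always sum to 48 * 21 = 1008
assert sum(totals.values()) == 1008
```

Each pickup's total must land between 48 (ranked last by everyone) and 288 (ranked first by everyone), matching the bounds quoted above.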
(www.youtube.com/watch?v=5KHBrqxGPuA&t=1336s)

The second-cheapest pickup (the SD) was rated equal highest, and the two most expensive (by a fairly wide price margin) were rated 4th and 5th. The least expensive (the Roswell) was rated last; with its lack of a clear resonant peak it might have fared better had the distorted recordings also been analysed, or had more listeners preferred warmer-sounding pickups (eg see the next element).

The final element of the testing involved two experienced guitarists who were provided with each PAF clone in turn - again blinded - to install in a single chosen/known guitar (one a PRS, the other a Tele Deluxe-style), which was then played very often over several weeks. During and at the end of that process, the two guitarists recorded their impressions of each pickup as they got to know it, and ranked the pickups from most- to least-preferred. Then, some months later, the process was repeated: the two players were sent the same pickups one at a time again, blinded and in random order, to again play for several weeks and record their impressions/preferences. The ratings of the two players were then compared to each other, and each player's ratings were compared to their own across the two time periods. This more closely mimics common player experiences of trying out new pickups, over time, in their own, known guitars (but unbiased by knowledge of what model of pickup it is). (www.youtube.com/watch?v=5KHBrqxGPuA&t=1344s)
Interestingly, the two players' ratings not only differed from each other, but their own ratings differed between period one and period two, sometimes considerably (although one consistent rating was due to broken blinding). This goes to show that subjective ratings, even under blinded and thorough conditions, are not very reliable over time. Add the fact that most players do actually know whether they have paid $500 for a set of PAF clones or not, as well as the reputation of the pickups. So such assessments become rather worthless, especially for pickups like these - which on the basis of more objective measures are mostly very similar. The Roswells were however rated somewhat more highly than might have been expected (perhaps reflecting individual preferences that include warmer-sounding pickups?).

More comments

The very close similarity between the bode plots raises some interesting questions. Is it just rather easy to make a PAF clone with an expected/agreed frequency response, by using a few well-known manufacturing parameters and easily-accessible components? If not, how else might we explain the similarity in the 5 bode plots? Might some of these manufacturers have analysed another's earlier product and copied it? Or did they all just copy similar original PAFs (though the evidence suggests that not all original PAFs were actually similar*)? If there was copying, might that copying have been done to a measured bode plot analysis of the copied pickup, or to something much simpler (estimated wind count, wire gauge, magnet type)? At least one of these PAF clone manufacturers implies that the only way to build an accurate PAF clone is to slavishly reproduce every single component of original PAFs, down to the type of plate mounting screws, as well as using actual vintage winding machines. The analysis here would seem to blow that notion out of the water.
*It is interesting to compare the bode plots of the 6 clones to the real thing - Helmuth Lemme has published bode plots of 3 late-50/early-60s PAFs; and they differ ....
But it is not known how similar the loading conditions for those plots were to the GITEC loading conditions (unspecified?), which comparing the bode plots would require. Looking at the PAF clone bode plots, I was struck by how dramatically the process of buying pickups would change if such objective bode plot data - recorded under standardized loading conditions - were readily available to the buyer, as it already is for speaker and microphone buyers. Of course some of that pickup data is already available ... www.echoesofmars.com/pickup_data/viewer/

Also regarding the buying process, imagine if something like the pickup change guitar were available in big guitar stores, to which any pickup could be easily attached via the screw terminals. Then you could easily compare pickups through your preferred rig. Fender already has a similar guitar, but presumably only for esteemed visitors to their California premises ...
(www.youtube.com/watch?v=9B0rXXAFgw4) (www.youtube.com/watch?v=9nxIkiSHN3c)
Regarding the listening test ratings, it would be interesting to do a further series of many blind A-B comparisons between just pickup pairs, to see if the relative ratings are consistent when only one pickup is compared against one other at a time (an easier assessment than trying to rate all 6 pickups at once). Such protocols also re-present the same pairings to the listener again later (blinded of course), to see if their superiority judgements are consistent over time. As it stands, we don't really know by how many 'points' in the original GITEC ratings two pickups need to differ in order for the higher-rated one to be consistently rated as 'better' by, say, 75% of participants. So for example the 4th-rated pickup might be consistently rated lower than the top two in A-B tests, or it might not.

On face value, the bode plots do not provide an explanation for the (small?) differences in the subjective ratings in the study's second and third parts, except for the low ratings of the Roswells in part two. One thing that was not totally clear from the presentation was how well pickup height was controlled in the pickup change guitar in part two, or in the two guitarists' setups in part three. But we don't really have a good understanding of how far PAF height needs to change in order to produce a consistent sonic difference that most people would discern.
|
|
|
Post by geo on Apr 22, 2023 9:33:34 GMT -5
Very interesting! Would love to see the 3rd part repeated with a much higher number of participants.
|
|
|
Post by antigua on Apr 22, 2023 13:49:10 GMT -5
With real PAFs, there was no neck or bridge pickup; they were the same, with the exception that the base plate was oriented differently to keep the cable wire as short as possible while still having the screws face away from the center on the top side of the pickup. Despite that fact, you see that PAF clone makers have obvious neck and bridge versions, so the question is: when and where did that start?
I was born in '79, so I was a little kid when the aftermarket pickup companies were starting out, so the exact timing of things is unclear to me, but it looks like the SD 59 neck and bridge, the JB/Jazz combo, and the DiMarzio Super Distortion / PAF Pro neck and bridge combos were all on the market around the same time, informing guitarists of the idea that you want a hot bridge pickup and a clear-sounding neck pickup. I suspect that most of the PAF clones are basically copies of the SD 59 formula, or that the idea for what they call a "balanced set" came from these 80's pickup sets.
From having analyzed pickups that are popular versus ones that are not, it seems to me that familiarity is the most important value. Whatever sounds most like what is heard on the radio will be ranked the best most of the time, and that seems to be true of all guitar products, not just pickups. So a PAF clone bridge around 8k and a neck around 7.5k has become the familiar sound that guitarists want, and while they say they're making perfect PAF clones, in reality it seems they're making SD 59 clones.
I think the Roswell has a brass cover, and the others all have nickel silver. There's no other likely explanation I know of for the difference in response curve. I think the idea that the Roswell would be ranked dead last in a listening contest but higher in real playing might have a parallel to Coke vs Pepsi, where Pepsi tastes better if you're sampling a Dixie cup portion, because it's sweeter, but if you buy a whole case of Pepsi, the sweetness becomes overwhelming and Coke becomes preferable when you're drinking many cans of it. The treble roll-off of the Roswell caused by the brass cover might make the guitar easier to mix, or "sit better in the mix", given the flatter EQ profile.
I appreciate that GITEC is doing this, because these subjective experiments are a hundred times more labor-intensive than just capturing the bode plots, and they must realize on some level that if the bode plots say the pickups are all the same, the painstaking experiments are just going to reveal that there is a bias, but not much else.
|
|
|
Post by ms on Apr 22, 2023 14:19:45 GMT -5
Well, I was born in '48, before there were PAFs, but I agree with Antigua, PAF clones are really SD clones. You make what sells, and you use a little old fashioned BS to make it appear to be what people think they should want. I wonder about the Roswell: does it have a brass cover, or a brass baseplate?
|
|
|
Post by stratotarts on Apr 22, 2023 21:28:14 GMT -5
I think it's safe to say, it's the cover. The losses are too extreme for the baseplate alone to account for it. They're close to measurements done on other brass cover PAF clones.
A thought - such a response might be perceived differently depending on whether there is anything else to compare it with. If there is, the drop in treble might trigger some biases; if not, some twiddling of knobs might find a sound that a musician finds pleasant. Add to that the fact that the subjects were invited to make value judgments, which are obviously going to be influenced by their musical styles.
If I were designing that experiment, I would instead ask them simply to identify the pickups as A-F or whatever. Still some bias, but not begging for it. I haven't followed the entire presentation yet, so I'm not sure. I think it's at least significant that there is correlation between the pickups and the subject ratings - the likely brass-covered one, which is verified in the plots, was spotted by the participants.
But it also illustrates a problem: when you ask for preferences, you can't tell from the results whether the answer was to "this pickup is the smoothest/dullest on a scale of 1-5" or to "I really like the sound of this one" - the question that was actually asked. In a way you've also done a survey on preferences, as well as on pickups.
I would think a pickup maker would find it handy to have some data on what people are asking for when they ask for "a smooth" or "a dull" pickup, for example.
|
|
|
Post by antigua on Apr 23, 2023 0:30:43 GMT -5
There was a question on the old Harmony Central product reviews - "if lost or stolen, would you replace it?" - that really cut through the noise: either you would miss the piece of equipment, or you would be indifferent to losing it. I think if you're doing as GITEC is doing, trying to see whether there's any sort of value in spending $600 on a PAF clone versus $100, then you just have to keep the subjective judgement as open-ended as possible: say, rank the pickups by which one you would give away first and which you would most like to keep.
|
|
timtam
Meter Reader 1st Class
Posts: 53
Likes: 24
|
Post by timtam on Apr 23, 2023 2:19:58 GMT -5
There are also research designs aimed specifically at consensus rankings, like pairwise comparison designs, for which software is available. You're presented with just two items at a time - so a recording from each of two pickups only - and asked: do you like one better than the other? (or even just: are they different?). Depending on your answers, you're then presented with different pairs, comparing the one you liked better to another, and then the one you liked less to another. Of course you don't know which ones you're listening to, whether you've heard them before, or how you ranked them before. At the end you've heard and ranked all possible combinations, and the software can then create a list of your ranked preferences for all of them. If you ranked two pickups one way when you first heard them and the opposite way when you heard them again, or ranked a pickup last throughout and then above your until-then-first-ranked pickup, that sort of inconsistency can be flagged.
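A minimal sketch of how such a pairwise (round-robin) design can be tallied. The names and the toy 'listener' here are hypothetical, and real pairwise-comparison software also handles the adaptive pair selection and the consistency flagging described above:

```python
from itertools import combinations

def rank_by_pairwise(items, prefers):
    """prefers(a, b) -> True if the listener picks a over b."""
    wins = {x: 0 for x in items}
    for a, b in combinations(items, 2):  # all 15 pairs for 6 pickups
        if prefers(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    # Most wins first; a cyclic (inconsistent) listener shows up as ties
    return sorted(items, key=lambda x: -wins[x])

pickups = ["PU1", "PU2", "PU3", "PU4", "PU5", "PU6"]
# Toy listener whose true preference simply follows the label order
consensus = rank_by_pairwise(pickups, lambda a, b: a < b)
```

Re-presenting the same pairs later and comparing the two win tables is one simple way to flag the over-time inconsistencies mentioned above.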
I suspect that trying to rank 6 pickups at once - where the 48 participants had to pick which (blinded) recordings to listen to (in what was probably a rather random assortment) - is too difficult a task to be very robust. Especially when 5 pickups are probably very similar. We don't have reliability data on the 48 participants' ratings, only the two experienced guitarists' - who weren't very reliable in their *own* judgments over time despite having played each pickup for several weeks.
|
|
|
Post by ms on Apr 23, 2023 14:05:44 GMT -5
I think it's safe to say, it's the cover. The losses are too extreme for the baseplate alone to account for it. They're close to measurements done on other brass cover PAF clones.
A thought - such a response might be perceived differently depending on whether there is anything else to compare it with. If there is, the drop in treble might trigger some biases; if not, some twiddling of knobs might find a sound that a musician finds pleasant. Add to that the fact that the subjects were invited to make value judgments, which are obviously going to be influenced by their musical styles.
If I were designing that experiment, I would instead ask them simply to identify the pickups as A-F or whatever. Still some bias, but not begging for it. I haven't followed the entire presentation yet, so I'm not sure. I think it's at least significant that there is correlation between the pickups and the subject ratings - the likely brass-covered one, which is verified in the plots, was spotted by the participants.
But it also illustrates a problem: when you ask for preferences, you can't tell from the results whether the answer was to "this pickup is the smoothest/dullest on a scale of 1-5" or to "I really like the sound of this one" - the question that was actually asked. In a way you've also done a survey on preferences, as well as on pickups.
I would think, a pickup maker would find it handy to have some data on what people are asking for when they ask for, "a smooth" or "a dull" pickup for example.
You are right, I agree that it is the cover.
|
|
|
Post by ms on Apr 23, 2023 14:29:22 GMT -5
What could possibly make the SD and the Amber sound better than the others below them in the second part of this test? Nothing that I can figure out. After thinking about it, it seemed likely that the top results are just random chance. It also appears that people really did not like the Roswell. In fact, I think that is the only clearly statistically significant result from this test, though maybe people really did not like the Kloppmann either.
You can do an actual analysis of this experiment, but I would not unless paid. But I have done a quick computer simulation. First you make a routine that puts the integers 1-6 in some random order every time you call it. Then you make a routine that runs that 48 times (for each of the 48 people). Then you sum over the 48 for each of the six. You then run this a lot of times and see how the six numbers range on the average. Here is a short sequence:
In [75]: gitecLTSim.r48() Out[75]: array([161., 157., 174., 171., 179., 166.])
In [76]: gitecLTSim.r48() Out[76]: array([178., 151., 170., 165., 181., 163.])
In [77]: gitecLTSim.r48() Out[77]: array([173., 180., 174., 152., 154., 175.])
In [78]: gitecLTSim.r48() Out[78]: array([175., 156., 153., 174., 162., 188.])
In [79]: gitecLTSim.r48() Out[79]: array([164., 185., 181., 153., 138., 187.])
In [80]: gitecLTSim.r48() Out[80]: array([163., 167., 165., 176., 157., 180.])
In [81]: gitecLTSim.r48() Out[81]: array([149., 178., 177., 159., 193., 152.])
You can do this all day, and I doubt that you will get a number as low as 80. However, it is not unusual to get 186 or higher, especially if you consider the need to redistribute as a result of the low number from the Roswell (that is, raise the others a bit).
So I think at least the top four in this test are not distinguishable, and maybe the fifth as well. And there is no surprise that people do not like a dead pickup (that is, a probable brass cover).
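The simulation above can be cross-checked analytically. This back-of-envelope sketch is my own, not part of the post: under purely random ranking, each pickup's total is a sum of 48 independent uniform draws from {1,...,6}, so it is approximately normal with mean 48 x 3.5 = 168 and standard deviation sqrt(48 x 35/12), about 11.8:

```python
import math

n = 48
mu = n * 3.5                    # mean total under random ranking: 168
sigma = math.sqrt(n * 35 / 12)  # variance of uniform{1..6} is (6**2 - 1)/12

def z(total):
    """How many standard deviations a total sits from the chance mean."""
    return (total - mu) / sigma

def p_at_least(total):
    """Normal-approximation upper-tail probability, continuity-corrected."""
    return 0.5 * math.erfc((total - 0.5 - mu) / (sigma * math.sqrt(2)))

# The Roswell's 80 points is about 7.4 sd below the chance mean:
# effectively impossible by chance, as the simulation suggests.
# The top scores of 186 are only about 1.5 sd above the mean, and
# p_at_least(185) comes out near 0.08, matching the estimate above.
```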
|
|
|
Post by stratotarts on Apr 23, 2023 19:32:39 GMT -5
That is right, the statistical significance is not high, given the limited sample size. Which doesn't mean it's untrue, just not very conclusive. I once worked in a factory which had 10 or so testing stations. The number of false DUT failures (thus no fault of the DUT, passes on re-test) per shift numbered in the 0-3 range. We were expected to take immediate action as soon as even one failure occurred. But, this was physically disruptive to the stations, due to RF sensitivity to component orientation. I argued for a more targeted approach that would track over several shifts and identify stations that needed adjustment. This was vehemently rejected.
So, I wrote a simple spreadsheet program that assigned failures randomly to 10 imaginary stations. The results looked exactly like a typical shift. I took those to my boss and pointed out, each station is absolutely identical and yet it appears that some are faulty. On every subsequent run, the distribution was different, as you would expect. I suggested, after repair work, some stations might actually have degraded performance. This was met with scorn. The sun really doesn't shine in the place where their heads are. Thank heavens I'm not involved with that stuff any more.
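The kind of random-assignment simulation described above is easy to reproduce; a minimal sketch, where the failure count and station count are hypothetical stand-ins for the original spreadsheet:

```python
import random

def simulate_shift(n_failures=20, n_stations=10, seed=None):
    """Assign each false failure to a station uniformly at random."""
    rng = random.Random(seed)
    counts = [0] * n_stations
    for _ in range(n_failures):
        counts[rng.randrange(n_stations)] += 1
    return counts

# Every 'station' here is statistically identical, yet a single run will
# typically show some stations with several failures and others with
# none: exactly the pattern that invited blame of specific stations.
counts = simulate_shift(seed=42)
```

Running it repeatedly shows a different "worst station" each time, which is the point of the anecdote.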
|
|
|
Post by Yogi B on Apr 24, 2023 4:46:37 GMT -5
Results:
1 Seymour Duncan '59 Model (SH1) - 186 points
1 Amber 'Spirit of 59' - 186 points
3 Haussel 59 - 167 points
4 Throbak SLE 101 MXV - 151 points
5 Kloppmann HB 59 Set - 131 points
6 Roswell LVS - 80 points

Why does the total number of points equal only 901, since 48 × (6+5+4+3+2+1) = 1008?
|
|
timtam
Meter Reader 1st Class
Posts: 53
Likes: 24
|
Post by timtam on Apr 24, 2023 9:11:18 GMT -5
Results:
1 Seymour Duncan '59 Model (SH1) - 186 points
1 Amber 'Spirit of 59' - 186 points
3 Haussel 59 - 167 points
4 Throbak SLE 101 MXV - 151 points
5 Kloppmann HB 59 Set - 131 points
6 Roswell LVS - 80 points

Why does the total number of points equal only 901, since 48 × (6+5+4+3+2+1) = 1008?
Not sure. It does appear that the rankings page allows you to do some odd things, like enter the same pickup number more than once; not sure if the code picks that up. Or someone may not have entered a pickup number for all 6 ranks. One has to assume that everyone who participated took the task seriously. The inability to give equal rankings to more than one pickup is also an apparent limitation.
|
|
timtam
Meter Reader 1st Class
Posts: 53
Likes: 24
|
Post by timtam on Apr 24, 2023 9:36:59 GMT -5
Out of curiosity I looked up the (very limited) specs offered by the 6 manufacturers:
Throbak SLE 101 MXV - www.throbak.com/paf-pickups-throbak-sle-101-mxv.html - A2, 42AWG, neck 7.6k, bridge 8.1k
Haussel 59 - haeussel.com/index.php?id=22 - A5, neck 7.5k, bridge 8.4k
Kloppmann HB 59 - www.kloppmann-electrics.com/en/hb-59-set.html - no specs
Amber 'Spirit of 59' - www.amberpickups.com/PRODUKTE/Humbucker/ - A4, 42AWG, neck 7.2k, bridge 8.4k
Seymour Duncan '59 Model (SH1) - www.seymourduncan.com/single-product/59-model - A5, neck 7.6k, bridge 8.2k
Roswell LVS - roswellpickups.com/product/lvs-b/ , roswellpickups.com/product/lvs-n/ - A2, neck 8.3k, bridge 8.4k
|
|
|
Post by antigua on Apr 24, 2023 10:34:16 GMT -5
The fact that the electrical values aren't controlled for kind of ruins the validity of testing for price point. Not only does the Roswell probably have a brass cover, but it appears to have a higher DC resistance and probably a higher inductance on top of that.
I'm surprised the DC resistance is disclosed for so many of those models, because increasingly makers have been hiding all technical specs other than those of a practical nature, such as mounting dimensions. I feel like the best a consumer can do is find posts like this one, which suggest pickups aren't really full of mysterious properties as they might have been led to believe, and be more selective about what portion of their guitar budget is spent on pickups versus other pieces of guitar gear.
|
|
|
Post by Yogi B on Apr 25, 2023 0:20:34 GMT -5
Why does the total number of points equal only 901, since 48 × (6+5+4+3+2+1) = 1008?

Not sure. It does appear that the rankings page allows you to do some odd things, like enter the same pickup number more than once; not sure if the code picks that up. Or someone may not have entered a pickup number for all 6 ranks. One has to assume that everyone who participated took the task seriously. The inability to give equal rankings to more than one pickup is also an apparent limitation.

For ranking, a more appropriate UI — i.e. a list that could be reordered by dragging the items, or each item having shift up/down buttons — should've eliminated the issue of coming up short. And, somewhat related to UI, I would ideally have liked to see the pickup labelling & positioning in the audio matrix randomized per user (say, via their IP address). Note how, other than the obvious exception of PU2 (the Roswell) and the tie for first place, the results have the pickups ranked according to their ordering.

Being ranked from 1st to 6th (rather than being rated on a scale of 1-6, wherein assignment of duplicate points would be allowed) also means that, for each respondent, the points assigned to the pickups are not independent of the other point assignments, thereby making any statistical analysis more complex (at least as far as I know; stats was never my strong point). Insofar as, although the total for any individual pickup should still follow a sextinomial/hexanomial distribution, the combined set of the six totals must follow some other distribution due to their interdependence. ms, without wishing to make work for you, can you share any insight as to what this distribution would be?
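A small simulation sketch (my own, hypothetical) of the constraint behind that interdependence: because every respondent hands out the fixed point set {1,...,6}, the six totals always sum to 48 × 21 = 1008, so they cannot vary independently:

```python
import numpy as np

def one_experiment(n_raters=48, n_pickups=6, rng=None):
    """Totals for one simulated listening test with random rankings."""
    rng = rng or np.random.default_rng()
    totals = np.zeros(n_pickups)
    for _ in range(n_raters):
        totals += rng.permutation(n_pickups) + 1  # one ranking: points 1..6
    return totals

totals = one_experiment(rng=np.random.default_rng(0))
# Each total is approximately normal on its own, but the fixed sum means
# knowing five of the six totals determines the sixth exactly.
assert totals.sum() == 48 * 21  # always exactly 1008
```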
|
|
timtam
Meter Reader 1st Class
Posts: 53
Likes: 24
|
Post by timtam on Apr 25, 2023 4:14:41 GMT -5
Manfred Zollner has an interesting take on part 2, the blind listening/subjective ranking website experiment (he was not one of the named investigators, so presumably was not involved). The following video was published about 6 months ago, before the experiment's results were presented on YouTube in the last week (they might of course have been presented elsewhere earlier?). In any case, his analysis does not refer to the actual results, only the methodology.
Again, the video is in German, but YouTube's closed captions and auto-translation to English work reasonably well, with some exceptions (eg 'clown' for 'clone'). It does require considerable concentration. He takes a very deep frequency-based 'dive' into each recording from the website. On the basis of his analyses, he argues that while the player was an eminently well-qualified professional, who made every effort to play repeat riffs consistently, the task was such that the riffs were not played consistently enough across pickups to provide truly 'fair' comparisons of those pickups. Zollner identifies numerous significant frequency/amplitude differences that should not be there had the riffs been truly consistent. All this suggests that having actual guitarists play in such tests, no matter how carefully, makes it almost impossible to exclude inadvertent sample-to-sample variation. If so, robotic picking/strumming devices are perhaps the only objective option for such studies (as long as the device used has been proven to be very consistent), especially in cases such as these where most pickups are expected to sound very similar.
However, Zollner does put forward criteria that could make human-played tests sufficiently consistent, along with other criteria to ensure valid comparisons between pickups. GITEC's protocol included all but his last criterion:
- Blinded tests
- Short, easy-to-play riffs (not exceeding listeners' short-term memory)
- Neutral in-room sound (ie not the amplified sound of each pickup, which could easily cause a player to play differently, eg to accentuate treble if it sounds to be missing from a given pickup)
- Quick pickup swaps
- In addition to the comparison/swapped pickups, use a permanently-installed pickup (bridge piezo) to generate a frequency spectrum from every played riff, so that software can be used to best-match played riffs based on consistency with a designated reference spectrum for that riff.

The video does include continuous playing of all 6 riffs across all 6 pickups (from around 7:58), so it's a convenient way to hear the samples back-to-back if you haven't already done so. The actual participants on the website, however, could select to listen to each individual riff for a given pickup in turn, or one riff for all pickups, or all riffs for one pickup, or any combination thereof (with repeat listens if they desired).
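Zollner's last criterion, matching played riffs against a reference spectrum, could in principle be implemented very simply. Here is a rough sketch of the idea (my own, assuming mono NumPy arrays at a common sample rate; this is not GITEC's or Zollner's actual software):

```python
import numpy as np

def log_spectrum(signal, n_fft=8192):
    """Magnitude spectrum in dB (signal is zero-padded/truncated to n_fft)."""
    mag = np.abs(np.fft.rfft(signal, n=n_fft))
    return 20 * np.log10(mag + 1e-12)  # small offset avoids log(0)

def spectral_distance(riff, reference, n_fft=8192):
    """RMS difference in dB between a played riff and the reference riff."""
    d = log_spectrum(riff, n_fft) - log_spectrum(reference, n_fft)
    return float(np.sqrt(np.mean(d ** 2)))

# Given several takes of the same riff (recorded via the fixed piezo),
# keep the take closest to the designated reference, e.g.:
#   best = min(takes, key=lambda t: spectral_distance(t, reference))
```

An identical take scores a distance of zero; takes with the kind of frequency/amplitude deviations Zollner identifies would score progressively higher and could be discarded.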
|
|
|
Post by ms on Apr 26, 2023 17:34:18 GMT -5
Not sure. It does appear that the rankings page allows you to do some odd things, like enter the same pickup number more than once; not sure if the code picks that up. Or someone may not have entered a pickup number for all 6 ranks. One has to assume that everyone who participated took the task seriously. The inability to give equal rankings to more than one pickup is also an apparent limitation.

For ranking, a more appropriate UI — i.e. a list that could be reordered by dragging the items, or each item having shift up/down buttons — should've eliminated the issue of coming up short. And, somewhat related to UI, I would ideally have liked to see the pickup labelling & positioning in the audio matrix randomized per user (say, via their IP address). Note how, other than the obvious exception of PU2 (the Roswell) and the tie for first place, the results have the pickups ranked according to their ordering. Being ranked from 1st to 6th (rather than being rated on a scale of 1-6, wherein assignment of duplicate points would be allowed) also means that, for each respondent, the points assigned to the pickups are not independent of the other point assignments, thereby making any statistical analysis more complex (at least as far as I know; stats was never my strong point). Insofar as, although the total for any individual pickup should still follow a sextinomial/hexanomial distribution, the combined set of the six totals must follow some other distribution due to their interdependence. ms, without wishing to make work for you, can you share any insight as to what this distribution would be?

OK, let's first look at the distribution of the outcome for one pickup. The sextinomial distribution tends towards something much simpler because you are adding 48 numbers together. You can guess what the simpler thing is. (By the way, the very simple computer code is shown at the end of this post.)
A simulated probability density function (N = 10 million is used for very smooth results) is shown in the figure. A Gaussian (the so-called normal distribution) is plotted underneath it, but the approximation is so good that it is only very slightly visible. (The mean and standard deviation are listed in the title of the plot.) The probability of an outcome of 80 or less is very small, but is not accurately given since all the simulated outcomes in the range 0 to 80 are 0; it would be found by summing from 0 to 80 if the numbers in that range were good. In a similar way, the probability of 185 or greater is about 0.08; we have accurate numbers in much of this range. The probability that one or more of the six will equal or exceed 185 is greater, although the dependencies between the six numbers make that probability harder to determine.

# For checking to see if the gitec listening test results
# can be gotten by chance.

import numpy as np

# Routine to produce the integers 1 through 6 in random order
def ranOr():
    x = np.random.rand(6)
    y = np.argsort(x)
    return y + 1

# Run ranOr 48 times (once per participant), store the results,
# then sum the six columns and return the six totals
def r48():
    z = np.zeros((48, 6))
    for i in range(48):
        z[i] = ranOr()
    s = np.zeros(6)
    for i in range(6):
        s[i] = np.sum(z[:, i])
    return s
# Sample density function of the total for one pickup
def msdf(N):
    sdf = np.zeros(288 + 1)
    for i in range(N):
        y = r48()
        sdf[int(y[0])] += 1.
    return sdf / N
|
|