I'm really glad to hear somebody else say that; I think about this same kind of thing all the time.
Yes, we all understand that the bigger the sample size, the more accurate the data. But are we suggesting that there is no value in analyzing the data before the sample size gets large? Surely not...so there is a sample size at which analysis becomes useful, and then a sample size at which analysis becomes increasingly accurate. Many newbies (myself included) are basically just trying to get an idea of how we're doing on average so we can "improve as we go," and frankly, I get sick of reading posts about how you need thousands and thousands of tourneys before you know if you're a good player.
I'm pretty sure Doyle Brunson wrote Super/System long before he had thousands of tourneys under his belt. I'm pretty sure Erik Seidel and Phil Hellmuth were recognized as world-class players long before they had 33,000 tournaments on file.
My gut feeling is that analysis can tentatively begin once you have 100 tournaments of the same type to analyze, as long as you keep in mind that the accuracy is in question for a while...
Anyway...I am not a statistician, so I really don't know what I'm talking about, but in my line of work we send out surveys to our patients, and each month we get reports on how those surveys came back. Every month we are told that if the number of surveys returned is under 30, the results are not considered statistically significant. I usually only get between 30 and 50 surveys returned per month, and my bonus is based on this, so it is good enough for my employer to decide how valuable an employee I am. By most suggested poker standards, I would find out whether I am good at my job somewhere near the end of my career...
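For what it's worth, the "useful vs. accurate" distinction can be made concrete with a back-of-envelope confidence-interval calculation. Here's a hedged sketch in Python: the payout distribution is entirely made up (a hypothetical $10 tourney where you lose the buy-in 70% of the time and cash occasionally), so the dollar figures mean nothing, but the way the uncertainty shrinks as the sample grows is real, since the 95% interval narrows roughly like 1/sqrt(n).

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def ci_halfwidth(results):
    """95% confidence-interval half-width for the mean of per-tourney profits."""
    n = len(results)
    mean = sum(results) / n
    var = sum((x - mean) ** 2 for x in results) / (n - 1)  # sample variance
    return 1.96 * math.sqrt(var / n)

def simulate(n):
    # Hypothetical $10 single-table tourney: lose the buy-in 70% of the time,
    # cash occasionally -- made-up numbers, but the high variance is realistic.
    return random.choices([-10, 15, 25, 50], weights=[70, 15, 10, 5], k=n)

for n in (30, 100, 1000, 30000):
    hw = ci_halfwidth(simulate(n))
    # The interval shrinks roughly like 1/sqrt(n)
    print(f"n={n:>5}: average profit known to within about ${hw:.2f} per tourney")
```

Under these assumed numbers, 100 tourneys pins your average profit down to within a few dollars per tourney, which may be enough to "improve as you go" but not enough to separate a small winner from a small loser; that gap is exactly the difference between useful and accurate.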