You are not making yourself clear.
This forum doesn't charge extra for punctuation marks, or for using the enter key.
To the extent that I can make any sense of this, you seem to be talking about regression to the mean and the gambler's fallacy.
In a large enough sample size, random events occur with the frequency that math predicts. For example, the chance of being dealt AA is about 1 in 221 (220-to-1 against), or roughly 0.45%. In a very large sample of randomly dealt starting hands, about 0.45% of those hands will be AA.
However, that doesn't mean that every 221-hand slice of that large sample will contain one and only one AA. Some slices will contain several AA's. Some will contain none.
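If you want to see this for yourself, here's a quick Python sketch (my own, nothing rigorous) that deals a large sample of random two-card hands using the exact probability 1/221 and counts how the AA's land in consecutive 221-hand slices:

```python
import random

rng = random.Random(42)  # fixed seed so the run repeats exactly

N = 100_000                # hands in our sample
P_AA = 6 / 1326            # C(4,2)/C(52,2) = 1/221, about 0.45%

def dealt_aa(rng):
    """Deal two distinct cards from a 52-card deck; True if both are aces."""
    c1, c2 = rng.sample(range(52), 2)     # cards 0..51, rank = card % 13
    return c1 % 13 == 0 and c2 % 13 == 0  # call rank 0 the ace

hands = [dealt_aa(rng) for _ in range(N)]
total_aa = sum(hands)
print(f"expected about {N * P_AA:.0f} AA's, got {total_aa}")

# Chop the sample into consecutive 221-hand slices (the last one is short)
# and see how unevenly the AA's land in them.
slices = [sum(hands[i:i + 221]) for i in range(0, N, 221)]
print("slices with no AA:", sum(1 for s in slices if s == 0))
print("slices with 2+ AA:", sum(1 for s in slices if s >= 2))
```

In a typical run the overall count lands near 450, yet roughly a third of the slices come up empty and about a quarter contain two or more AA's. That's clustering, not a cycle.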
Let's say we take a sample size of one hundred thousand randomly dealt hold'em hole cards.
Before we deal the cards, Regression to the Mean validly predicts that no matter how AA's cluster in any given slice of our sample, by the end we are likely to have about (100,000 * .0045) = 450 AA's dealt.
The "gambler's fallacy" comes into play when we assign causality to regression to the mean.
Regression to the mean does not predict the number of events in any given slice of our sample.
In our sample of one hundred thousand hands, we can predict that 90% of the expected 450 AA hands will be dealt when 90% of the total sample is complete. So when we reach 90,000 random hands, we should expect (450*.90) = 405 AA hands to be dealt.
What if only 400 AA hands have been dealt? What is the most accurate prediction for the number of AA hands that will be dealt in the last 10,000 hands of our sample?
It is simply (10,000*.0045) = 45 AA hands.
We should not adjust our prediction for the number of AA hands in the last 10,000 hands of our sample based on the prediction we made before the hands were dealt.
Instead, we should adjust our prediction for the number of AA hands in the whole 100,000 hand sample based on the number of AA hands that have been dealt so far.
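To make the "no memory" point concrete, here's another small sketch (again my own, using hypothetical numbers: at the true 1/221 rate, about 405 AA's are expected in the first 90,000 hands, but suppose only 400 showed up). It computes the updated predictions and then sanity-checks the last 10,000 hands by simulation:

```python
import random

rng = random.Random(7)  # fixed seed for repeatability
P_AA = 6 / 1326         # 1/221, about 0.45%

seen_so_far = 400       # hypothetical: a bit under the ~405 expected by hand 90,000
remaining = 10_000

# The cards don't remember the deficit, so the best prediction for the last
# 10,000 hands is just rate * remaining...
pred_last_slice = remaining * P_AA                # about 45
# ...and the best prediction for the whole sample is updated from what we saw.
pred_full_sample = seen_so_far + pred_last_slice  # about 445, not the original ~450

# Sanity check: simulate the final 10,000 independent hands many times over.
trials = 500
counts = [sum(rng.random() < P_AA for _ in range(remaining)) for _ in range(trials)]
mean_last = sum(counts) / trials
print(f"predicted {pred_last_slice:.1f} for the last slice, simulated mean {mean_last:.1f}")
```

The simulated average for the final slice sits right at the unconditional expectation, because the deficit never enters the calculation anywhere.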
The cards have no memory.
They do not know they are part of a 100,000 hand experiment.
They will just continue to be random.
Even though we started the process with a valid expectation that about 450 AA hands would be dealt, Regression to the Mean DOES NOT predict that 50 AA hands will be dealt in these last 10,000 hands in order to "catch up to" our original expectation.
We are not "due" for an above average number of AA hands dealt in this last 10,000 hand slice of our sample. Believing that we are is the "gambler's fallacy".
TL;DR:
Random events, by definition, do not occur in cycles. They are usually not distributed evenly across a sample; they occur in clusters. But a cluster is not part of a cycle, and for truly random events it never is. Cycles are predictable, and random events, by definition, are not.