Observers of my now-dead-trying-to-bring-it-back-alive-again blog would have noticed that I was *trying* to talk some mumbo jumbo, something to the effect of quantitative mathematics, in order to look intelligent.

Well, the same people might as well realise that what I said, or blogged for that matter, failed badly on my own scales. Quality-wise? (Well, yes.) But it failed on its very first count: it didn't make me look intelligent 😀 because it didn't look intelligible in the first place.

So now, in order to prove to my dear patient readers how honoured you are to be reading such an exalted-yet-unknown blogger (mind you, he loves anonymity; you can follow his anonymity at @sohamdas), I am restarting the entire series.

So what I will be doing is running at a very brisk pace through the very basics.

And yes, before I start: most of us will already have a working understanding of all this. But still, let's play ball.

The first thing we start with is a random experiment. It has three important components:

1. multiplicity of outcomes
2. uncertainty regarding the outcomes
3. repeatability of the experiment

So, by that, boiling of water at room temperature meets criterion #3 but not #1 and #2. But tossing a coin does: it has multiple outcomes, you are uncertain about the outcome of each toss, and you can keep tossing the coin till it holds up or screams "mercy".
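The three components can be sketched in a few lines of code. This is a minimal illustration of my own, using Python's standard `random` module; the function name `toss_coin` and the outcome labels are mine, not anything canonical.

```python
import random

def toss_coin():
    """One run of the random experiment: returns one of two outcomes."""
    return random.choice(["heads", "tails"])

# 1. Multiplicity of outcomes: there is more than one possible result.
outcomes = {"heads", "tails"}

# 3. Repeatability: we can run the experiment as often as we like.
tosses = [toss_coin() for _ in range(10)]

# 2. Uncertainty: each toss is unpredictable in advance,
#    yet every result is always one of the known outcomes.
assert all(t in outcomes for t in tosses)
```

Boiling water at room temperature, by contrast, would be a function that returns the same single outcome every time: repeatable, but with neither multiplicity nor uncertainty.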

Now, a sample space is the set of all possible outcomes of a random experiment. By definition, a random experiment will have only those outcomes contained in the sample space (rather the opposite, but for the sake of continuity let us continue). Hence, you can "expect" each outcome to come up equally often if the experiment is repeated a large number of times. "Equally" is a very tenuous word, and it is not always necessary that the frequencies be equal. It is just that the random experiments we see in daily life, like tossing a coin or rolling a die, are "usually" fair experiments.

So inherently, what we were working with was the assumption that each outcome has an equal frequency of coming up. But that need not be the case. Change the individual frequencies, and you get biased outcomes: e.g. the chances of that girl accepting your proposal, or the chances of Shiv Sena fighting tooth and nail against the release of SRK movies.
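A biased experiment is easy to simulate. Here is a rough sketch, assuming a two-outcome experiment where one outcome is weighted nine times heavier than the other; the labels, weights, and seed below are purely illustrative choices of mine.

```python
import random
from collections import Counter

random.seed(0)  # seed chosen only for reproducibility

# Unequal "individual frequencies" give biased outcomes.
outcomes = ["yes", "no"]
weights = [1, 9]  # "no" comes up roughly nine times as often as "yes"

draws = random.choices(outcomes, weights=weights, k=10_000)
counts = Counter(draws)
print(counts["yes"] / len(draws))  # roughly 0.1, nowhere near 0.5
```

The fair coin is just the special case where all the weights are equal.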

And thus, if there are $n$ outcomes, $x_{1},x_{2},x_{3},\ldots,x_{n}$, then each will have an associated frequency $f_{i}$ attached to it, which you can observe if you repeat the random experiment a sufficiently large number of times, say ${\large N}$ times.

Now, the sum of all the frequencies will, and should, yield the total number of times the experiment is repeated.

Hence $\sum_{i=1}^{n} f_{i} = {\large N}$.

And hence, the chance of an outcome $x_{i}$ turning up is $f_{i}/{\large N}$. I hope that makes sense.
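The whole frequency argument can be checked numerically. Below is a sketch using a fair six-sided die as the random experiment; the choice of die, the value of $N$, and the seed are my own illustrative assumptions.

```python
import random
from collections import Counter

random.seed(42)  # seed chosen only for reproducibility

N = 100_000  # number of repetitions of the experiment
rolls = [random.randint(1, 6) for _ in range(N)]  # rolling a fair die

freq = Counter(rolls)  # f_i for each outcome x_i

# The frequencies sum to N, the total number of repetitions.
assert sum(freq.values()) == N

# Chance of each outcome = f_i / N, which hovers near 1/6 ≈ 0.1667.
for outcome in sorted(freq):
    print(outcome, freq[outcome] / N)
```

For a fair die, every $f_{i}/{\large N}$ printed above sits close to $1/6$, and the fit gets tighter as $N$ grows.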

Now, if the $f_{i}$ are all equal, then ${\large N}=n \cdot f_{i}$.

Which means the chance of each outcome $= f_{i}/(n \cdot f_{i})$, or $1/n$.
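The arithmetic of that last step, spelled out with some made-up numbers of mine (a die with equal face counts):

```python
# If every outcome has the same frequency f, then N = n * f,
# so the chance of each outcome is f / (n * f) = 1 / n.
n = 6      # number of outcomes, e.g. faces of a fair die (illustrative)
f = 500    # equal frequency of each outcome (illustrative)
N = n * f  # total repetitions: 3000

chance = f / N
assert chance == 1 / n  # each face has chance 1/6
```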

Simple logic, dicey proposition.

In this blog, I touched upon what discrete random variables are, that is, countables: you can count the number of outcomes and work with them. In the next blog, we will look at continuous random variables.