Continuous Variables and Eliciting Probability Distributions

A discrete random variable is one that can take on only a countable number of distinct values, such as 0, 1, 2, 3, 4, …

A continuous random variable can take any value within an interval, including every decimal value in between.

1. Continuous random variables exist, and they can take any value within an infinite range.

2. The probability that a continuous random variable takes any one specific value is zero.

3. Probabilities for a continuous random variable are determined by its density function, which is non-negative and has total area one beneath it.

4. The probability that the random variable lies between two values a and b is the area under the density function between them.
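To illustrate point 4, here is a minimal sketch (the function names are mine, not from the post) that approximates P(a < X < b) for a standard normal density by numerical integration and compares it with the closed form via the error function:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def prob_between(a, b, n=10_000):
    # P(a < X < b) = area under the density between a and b (trapezoidal rule)
    h = (b - a) / n
    total = 0.5 * (normal_pdf(a) + normal_pdf(b))
    total += sum(normal_pdf(a + i * h) for i in range(1, n))
    return total * h

area = prob_between(-1.96, 1.96)
# closed form: Phi(b) - Phi(a), expressed with the error function
exact = 0.5 * (math.erf(1.96 / math.sqrt(2)) - math.erf(-1.96 / math.sqrt(2)))
```

Both numbers land on the familiar 95% area between −1.96 and 1.96 standard deviations, which is exactly the "probability as area" idea in point 4.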

Some examples of basic distributions -

Continuous: normal, uniform, beta, gamma
Discrete: binomial, Poisson
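As a quick illustration (the helper code is mine, not the author's), Python's standard library can sample from most of these families; a binomial draw can be built from Bernoulli trials, and Poisson is omitted here since older standard-library versions lack a sampler for it:

```python
import random

random.seed(0)
N = 50_000

# Continuous families from the standard library, checked against their means
normal_mean = sum(random.normalvariate(0, 1) for _ in range(N)) / N    # mean 0
uniform_mean = sum(random.uniform(0, 1) for _ in range(N)) / N         # mean 0.5
beta_mean = sum(random.betavariate(2, 3) for _ in range(N)) / N        # mean 2/5
gamma_mean = sum(random.gammavariate(2, 1.5) for _ in range(N)) / N    # mean 3

# Discrete: a Binomial(10, 0.3) draw as a sum of ten Bernoulli(0.3) trials
binom_mean = sum(sum(random.random() < 0.3 for _ in range(10))
                 for _ in range(N)) / N                                # mean 3
```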

Elicitation

  • Bayesians express their belief in terms of personal probabilities
  • These personal probabilities encapsulate everything a Bayesian knows or believes about the problem
  • These beliefs must obey the laws of probability and be consistent with everything else the Bayesian knows
  • A member of the beta family is specified by two parameters, α and β
  • This is denoted as p∼beta(α,β)
Probability density function

f(p) = Γ(α+β) / (Γ(α) Γ(β)) × p^(α−1) × (1−p)^(β−1), where 0 ≤ p ≤ 1, α > 0, β > 0, and Γ is the gamma function; for a positive integer n it reduces to a factorial: Γ(n) = (n−1)! = (n−1)×(n−2)×⋯×1.

When α=β=1, the beta distribution becomes a uniform distribution, i.e. the probability density function is a flat line.

The expected value of p is α/(α+β), so α can be regarded as the prior number of successes, and β the prior number of failures.
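The flat-line case and the formula for the mean can both be checked numerically; the sketch below (function names are my own) evaluates the density via the gamma function and approximates E[p] with a midpoint sum:

```python
import math

def beta_pdf(p, a, b):
    # f(p) = Gamma(a+b) / (Gamma(a) * Gamma(b)) * p**(a-1) * (1-p)**(b-1)
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * p ** (a - 1) * (1.0 - p) ** (b - 1)

# alpha = beta = 1 gives the flat line f(p) = 1 everywhere on [0, 1]
flat = beta_pdf(0.25, 1, 1)

def beta_mean(a, b, n=100_000):
    # E[p] = integral of p * f(p) dp, midpoint sum; closed form is a / (a + b)
    h = 1.0 / n
    return sum((i + 0.5) * h * beta_pdf((i + 0.5) * h, a, b) * h for i in range(n))
```

For example, beta_mean(3, 7) comes out at the closed-form value 3/(3+7) = 0.3, matching the "prior successes over prior trials" reading of α and β.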

When α=β, the pdf is symmetric around 0.5.

For large but equal values of α and β, most of the area under the beta density lies close to 0.5, i.e. the distribution concentrates tightly around 0.5.
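This concentration can be verified numerically. The sketch below (my own helper names; the density is evaluated in log space with lgamma so that large α and β do not overflow the gamma function) measures how much probability falls within ±0.05 of 0.5:

```python
import math

def beta_pdf(p, a, b):
    # beta density evaluated in log space so large a, b do not overflow Gamma
    log_f = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
             + (a - 1) * math.log(p) + (b - 1) * math.log(1.0 - p))
    return math.exp(log_f)

def mass_near_half(a, width=0.05, n=2_000):
    # P(0.5 - width < p < 0.5 + width) for the symmetric beta(a, a), midpoint rule
    lo = 0.5 - width
    h = 2.0 * width / n
    return sum(beta_pdf(lo + (i + 0.5) * h, a, a) * h for i in range(n))

small = mass_near_half(5)    # modest alpha = beta: mass is spread out
large = mass_near_half(100)  # large alpha = beta: mass piles up near 0.5
```

With α = β = 100 well over 80% of the probability sits within ±0.05 of 0.5, while with α = β = 5 only a minority of it does.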

Beta Family Illustration

The choice of prior distribution for the parameters plays an important role in statistical modeling.

  1. Independence assumptions, etc. can have notable effects on the inference.
  2. A common point of contention for Bayesian methods
  3. Many philosophies about the priors — Subjective Bayes, Objective Bayes, Prior as part of the Model, etc.
  4. Statement of prior distribution makes assumptions more explicit
5. A Bayes estimate is consistent as long as the true value is in the support of the prior

Conjugacy relation: — In mathematics, specifically group theory, two elements a and b of a group are conjugate if there is an element g in the group such that b = gag⁻¹. This is an equivalence relation whose equivalence classes are called conjugacy classes. The Bayesian notion of conjugacy used below shares the name but is a different concept.

Conjugacy: —

Let's say we hold the following prior beliefs about the data:

  • Binomial distribution Bin(n, p) with n known and p unknown.
  • Prior belief about p is beta(α, β)

Then we observe x successes in n trials, and it turns out that Bayes' rule implies our new belief about the probability density of p is also a beta distribution, but with different parameters.

Conjugacy occurs when the posterior distribution is in the same family of probability density functions as the prior belief, but with new parameter values, which have been updated to reflect what we have learned from the data.

We can recognize the posterior distribution from the terms p^(α+x−1) and (1−p)^(β+n−x−1) in the numerator.

Everything else is just a constant, and it must take the unique value required to ensure that the area under the curve between 0 and 1 equals 1.

So it has to be the normalizing constant of the beta distribution with parameters α+x and β+n−x, which is therefore the posterior.
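The whole update can be checked numerically: multiply the beta(α, β) prior by the binomial likelihood p^x (1−p)^(n−x) on a grid, normalize, and compare with the closed-form beta(α+x, β+n−x). A minimal sketch (helper names are mine):

```python
import math

def beta_pdf(p, a, b):
    # beta density via lgamma for numerical stability
    return math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                    + (a - 1) * math.log(p) + (b - 1) * math.log(1.0 - p))

def grid_posterior(alpha, beta, n, x, grid_n=10_000):
    # posterior proportional to prior(p) * p**x * (1-p)**(n-x),
    # normalized so the area on [0, 1] equals 1 (midpoint grid)
    h = 1.0 / grid_n
    ps = [(i + 0.5) * h for i in range(grid_n)]
    unnorm = [beta_pdf(p, alpha, beta) * p ** x * (1.0 - p) ** (n - x) for p in ps]
    z = sum(unnorm) * h
    return ps, [u / z for u in unnorm]

alpha, beta, n, x = 2, 2, 10, 7
ps, post = grid_posterior(alpha, beta, n, x)
# conjugacy: the grid posterior should match beta(alpha + x, beta + n - x) pointwise
```

With 7 successes in 10 trials and a beta(2, 2) prior, the grid posterior coincides with beta(9, 5), whose mean (α+x)/(α+β+n) = 9/14 sits between the prior mean 0.5 and the sample proportion 0.7.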

Written by Sandeep Sharma

Manager Data Science — Coffee Lover — Machine Learning — Statistics — Management Consultant — Product Management — Business Analyst
