A random variable is a quantity whose numerical value is determined by the outcome of a random experiment; equivalently, it is a rule that assigns a number to each outcome of the experiment (for example, the number of heads in ten coin tosses).
The first step in estimating the parameters \(\theta \) by the method of maximum likelihood is to set up the likelihood function and find the values that maximize it.
M.L.E:
Let the random variables \({X_1},{X_2},...,{X_n}\) have joint pdf or pmf
\(f\left( {{x_1},{x_2},...,{x_n};{\theta _1},{\theta _2},...,{\theta _m}} \right)\)
where the parameters \({\theta _i},\;i = 1,2,...,m\) are unknown. When the observed values \({x_1},{x_2},...,{x_n}\) are held fixed and f is regarded as a function of the parameters \({\theta _i},\;i = 1,2,...,m\), it is called the likelihood function. The values \({\widehat \theta _i}\) that maximize the likelihood function are the maximum likelihood estimates; equivalently, they are the values \({\widehat \theta _i}\) for which
\(f\left( {{x_1},{x_2},...,{x_n};{{\widehat \theta }_1},{{\widehat \theta }_2},...,{{\widehat \theta }_m}} \right) \ge f\left( {{x_1},{x_2},...,{x_n};{\theta _1},{\theta _2},...,{\theta _m}} \right)\)
for every \({\theta _i},\;i = 1,2,...,m\). The maximum likelihood estimators are then obtained by substituting the random variables \({X_i}\) for the observed values \({x_i}\) in these estimates.
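Before the exponential example below, here is a minimal numerical sketch of this definition in Python for a Bernoulli model; the sample counts, the closed-form check \(\widehat p = k/n\), and the use of scipy are illustrative assumptions, not part of the source.

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data (an assumption for illustration):
# n = 10 Bernoulli trials with k = 7 observed successes.
n, k = 10, 7

# Negative log-likelihood of Bernoulli(p): -[k log p + (n - k) log(1 - p)].
def neg_log_likelihood(p):
    return -(k * np.log(p) + (n - k) * np.log(1.0 - p))

# Maximizing the likelihood is the same as minimizing its negative over 0 < p < 1.
res = minimize_scalar(neg_log_likelihood, bounds=(1e-9, 1.0 - 1e-9), method="bounded")

print(res.x)   # numerical maximum likelihood estimate, approximately 0.7
print(k / n)   # analytic estimate p_hat = k/n, for comparison

The numerical maximizer agrees with the analytic estimate, which is exactly what the defining inequality above requires.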
When sampling from the exponential distribution, the likelihood function is
\(f\left( {{x_1},{x_2},...,{x_n};\beta } \right) = {\beta ^n}\,{e^{ - \beta y}},\)
where \(y = \sum\limits_{i = 1}^n {{x_i}} \).
The maximum likelihood estimates are more easily obtained by maximizing the logarithm of the likelihood function:
\(\begin{aligned}L\left( \beta  \right) &= \log f\left( {{x_1},{x_2},...,{x_n};\beta } \right)\\ &= n\,\log \beta  - \beta y\end{aligned}\)
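The section stops at the log-likelihood; as a standard completing step (an addition, not stated in the original), setting the derivative with respect to \(\beta \) equal to zero gives
\(\frac{{dL\left( \beta  \right)}}{{d\beta }} = \frac{n}{\beta } - y = 0\quad  \Rightarrow \quad \widehat \beta  = \frac{n}{y} = \frac{n}{{\sum\limits_{i = 1}^n {{x_i}} }} = \frac{1}{{\bar x}}\)
Since the second derivative \( - n/{\beta ^2}\) is negative, this is indeed a maximum, so the maximum likelihood estimator of \(\beta \) is the reciprocal of the sample mean.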