Big $\mathcal{O}$ notation question when approximating $\sum \frac{\log n}{n}$
I have a function $S(x)$ which is bounded above and below as follows:
$f(x) + C_1 + \mathcal{O}(g(x)) < S(x) < f(x) + C_2 + \mathcal{O}(g(x))$ as $x \rightarrow \infty$
Can I conclude that $$S(x) = f(x) + C + \mathcal{O}(g(x))$$ as $x \rightarrow \infty$? Can anyone offer a brief proof of this fact?
EDIT:
(To clear up the confusion, I am stating the problem below. I am wondering whether what I did was right.)
Prove that as $x \rightarrow \infty$
$$\displaystyle \sum_{n \leq x} \frac{\log n}{n} = \frac{(\log x)^2}{2} + C + \mathcal{O}(\frac{\log(x)}{x})$$
where $C$ is a constant.
This is how I did it.
Approximate the sum by the integral $\int \frac{\log t}{t}\, dt$ from above and from below (the usual way of getting bounds for $p$-series). Then we get the lower bound $\frac{\log([x])^2}{2} + C_1$ and the upper bound $\frac{\log([x])^2}{2} + C_2$.
Then, since $0 \le \{x\} < 1$, we have $\log([x]) = \log(x) + \log(\frac{[x]}{x}) = \log(x) + \log(1-\frac{\{x\}}{x}) = \log(x) + \mathcal{O}(\frac{\{x\}}{x}) = \log(x) + \mathcal{O}(\frac{1}{x})$.
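As a quick numerical sanity check of that last step (a rough check, not part of the argument): if $\log([x]) = \log(x) + \mathcal{O}(\frac{1}{x})$, then $x\,(\log x - \log[x])$ should stay bounded:

```python
import math

# x * (log x - log floor(x)) should stay bounded (it is roughly {x} < 1),
# which is exactly what log[x] = log x + O(1/x) asserts.
for x in [10.7, 101.3, 1024.9, 1e5 + 0.5, 1e7 + 0.99]:
    diff = math.log(x) - math.log(math.floor(x))
    print(f"x = {x:>12.2f}   x * (log x - log[x]) = {x * diff:.4f}")
```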
Plugging this into the bounds above, I arrive at the two-sided estimate in my question.
I did this on a timed test, since I could not think of any other way to reach the answer. Other solutions and pointers are welcome, as always.
I do not think so, because $g(x)$ could be too small. Take $f(x)=0, C_1=-1.5, C_2=1.5, g(x)=\exp(-x)$. Then doesn't $S(x)=\sin(x)$ violate it?
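A quick numerical illustration of why this counterexample works (a sketch, not a rigorous argument): for any candidate constant $C$, $|\sin x - C| \ge 1$ at one of the peaks where $\sin x = \pm 1$, while $e^{-x} \to 0$, so the ratio $|\sin x - C| / e^{-x}$ is unbounded and $\sin x \ne C + \mathcal{O}(e^{-x})$:

```python
import math

# For any constant C, |sin x - C| >= 1 at one of the peaks sin x = +1
# (x = pi/2 + 2*pi*k) or sin x = -1 (x = 3*pi/2 + 2*pi*k), while
# exp(-x) -> 0, so |sin x - C| / exp(-x) blows up along those peaks.
for C in [-1.0, 0.0, 0.5, 1.0]:
    for k in [1, 5, 10]:
        x = math.pi / 2 + 2 * math.pi * k      # peak with sin x = +1
        if abs(1.0 - C) < abs(-1.0 - C):       # use whichever peak is
            x += math.pi                       # farther from C
        ratio = abs(math.sin(x) - C) * math.exp(x)
        print(f"C = {C:>4}, x = {x:6.2f}, |sin x - C|/exp(-x) = {ratio:.2e}")
```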
A misconception needs to be cleared up here. Recall that $f(x) = O(g(x))$ means that there exists a constant $C$ such that $|f(x)| \le C |g(x)|$ in a neighborhood of whatever value of $x$ you care about. It is an abuse of notation that we use the equality symbol here at all; $O(g(x))$ is not a function, so it cannot be "equal" to $f(x)$ in the usual sense. See the discussion at the Wikipedia article.
Based on this definition, people sometimes write $f(x) = h(x) + O(g(x))$ if $f(x) - h(x) = O(g(x))$. This is a further abuse of notation, but it usually does not cause problems and can be extremely convenient. I suppose you could write $f(x) < h(x) + O(g(x))$ if you really wanted to, but this is redundant, since the inequality is already built into the definition of big-O notation.
But I have no idea what $f(x) > h(x) + O(g(x))$ is supposed to mean. If you are trying to say that there is a constant $C$ such that $|f(x) - h(x)| > C |g(x)|$, the correct way to write this is with big-Omega, not big-O: $f(x) = h(x) + \Omega(g(x))$.
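A concrete illustration of the contrast (an added example, not from the original discussion): since $\cosh x - \frac{e^x}{2} = \frac{e^{-x}}{2}$ exactly, both
$$\cosh x = \frac{e^x}{2} + \mathcal{O}(e^{-x}) \qquad \text{and} \qquad \cosh x = \frac{e^x}{2} + \Omega(e^{-x})$$
hold: the first bounds the error from above, the second from below.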
Since you wanted a different proof strategy, you could try Abel's identity, which turns out to be quite useful in analytic number theory.
For example, see this: An estimate for sum of reciprocals of square roots of prime numbers.
To apply this to your question:
Since we know that
$\displaystyle \sum_{1 \le n \le x} \frac1{n} = \log x + \gamma + R(x)$, where $\displaystyle R(x) = \mathcal{O}\left(\frac1{x}\right)$
applying Abel's identity, $\displaystyle \sum_{1 \le n \le x} a_n f(n) = A(x) f(x) - \int_{1}^{x} A(t) f'(t) \ \text dt$, with $a_n = \frac1{n}$, $A(x) = \sum_{1 \le n \le x} \frac1{n}$ and $f(t) = \log t$, we have that
$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = (\log x+ \gamma + R(x))\log x - \int_{1}^{x} \frac{\log t+ \gamma + R(t)}{t} \ \text dt$
i.e.
$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = \frac{\log^2 x}{2} + R(x)\log x - \int_{1}^{x} \frac{R(t)}{t} \ \text dt$
Since $\displaystyle R(x) = \mathcal{O}\left(\frac1{x}\right)$, the integral $\displaystyle \int_{1}^{\infty} \frac{R(t)}{t} \ \text dt = \eta$ converges, and its tail satisfies $\displaystyle \int_{x}^{\infty} \frac{R(t)}{t} \ \text dt = \mathcal{O}\left(\frac1{x}\right)$, so $\displaystyle \int_{1}^{x} \frac{R(t)}{t} \ \text dt = \eta + \mathcal{O}\left(\frac1{x}\right)$. We also have that $\displaystyle R(x) \log x = \mathcal{O}\left(\frac{\log x}{x}\right)$.
Hence
$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = \frac{\log^2 x}{2} -\eta + \mathcal{O}\left(\frac{\log x}{x}\right) \ \ \text{as} \ \ x \to \infty$
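As a numerical sanity check (a rough sketch, not part of the argument above), the difference $\sum_{n \le x} \frac{\log n}{n} - \frac{\log^2 x}{2}$ should settle toward the constant $-\eta$, with the error shrinking on the order of $\frac{\log x}{x}$:

```python
import math

def partial_sum(x: int) -> float:
    """Compute sum_{n <= x} log(n) / n directly."""
    return sum(math.log(n) / n for n in range(1, x + 1))

# The difference should approach a constant (-eta above; if I recall
# correctly, this is the first Stieltjes constant, about -0.0728),
# with error comparable to log(x)/x.
for x in [10**3, 10**4, 10**5, 10**6]:
    diff = partial_sum(x) - math.log(x) ** 2 / 2
    print(f"x = {x:>8}   sum - (log x)^2/2 = {diff:.8f}   "
          f"log(x)/x = {math.log(x) / x:.2e}")
```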
Another useful strategy is the Euler-Maclaurin summation formula.
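A minimal sketch of that route (an added outline, using only the first-order form of Euler-Maclaurin): for integer $N$ and $f(t) = \frac{\log t}{t}$,
$$\sum_{n=1}^{N} f(n) = \int_{1}^{N} f(t) \ \text dt + \frac{f(1)+f(N)}{2} + \int_{1}^{N} \left(\{t\}-\tfrac12\right) f'(t) \ \text dt.$$
Here $\int_{1}^{N} f(t) \ \text dt = \frac{\log^2 N}{2}$ and $\frac{f(1)+f(N)}{2} = \frac{\log N}{2N} = \mathcal{O}\left(\frac{\log N}{N}\right)$, while $f'(t) = \frac{1-\log t}{t^2}$ makes the last integral absolutely convergent, with tail $\int_{N}^{\infty} = \mathcal{O}\left(\frac{\log N}{N}\right)$. Collecting terms again gives $\frac{\log^2 N}{2} + C + \mathcal{O}\left(\frac{\log N}{N}\right)$.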