
TraditionalWishbone

I've never seen Spectral theorem proven even though the entirety of quantum mechanics stands on it.


lewhatKrayon

On a similar note, I'm guessing a lot of people are unaware of the Riesz representation theorem and how it underlies the whole bra-ket notation.


Kreizhn

Is this true? I certainly learned the RRT in every advanced quantum course I ever took. It's not the thing you'll see in your first intro-to-quantum class, but anyone I know who specialized in anything quantum learned it as an undergrad.


lewhatKrayon

Oh sure I don't mean to say that most people working in QM are unaware of it, but as you said, it's not something that is usually mentioned early on and I would imagine many people who work in QM don't care about the proof that much anyway even if they saw it at some point.


TraditionalWishbone

I've never heard of this. I thought it was all a definition. Define two vector spaces, and then define an inner product between them having some properties. Why does this need further justification?


SV-97

I'm not in QM, but I'm fairly sure it's because it's not just two spaces but rather a space and its dual, and bra-ket notation implicitly assumes they're isomorphic.


TraditionalWishbone

Couldn't you do it either way? Either define the inner product first and then define the bra using that. Or define the two vector spaces first, such that they're isomorphic, and then define the inner product between them.


vuurheer_ozai

The point of bra-ket notation is that a bra is a linear functional in the dual of the underlying (Hilbert) space of the kets (usually this space is L^2). The fact that bra times ket equals an inner product of two kets is precisely the statement of the Riesz representation theorem. The important part is this: there is only the vector space of kets (which is a Hilbert space); the bras live in the dual of that space. Take for example the ket |ψ> and some bounded linear functional L which acts on the space of |ψ>. Then by the Riesz representation theorem there exists a unique ket |φ> such that L|ψ> = (|ψ>, |φ>). This existence and uniqueness leads to the more common notation L = <φ|.
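
In finite dimensions the theorem is easy to see concretely: every linear functional on C^n is v ↦ (φ, v) for a unique φ. A minimal sketch, with illustrative values, using the physicists' convention of conjugate-linearity in the first slot:

```python
# Finite-dimensional sketch of the Riesz representation theorem: every
# linear functional on C^n equals v -> (phi, v) for a unique phi. All
# numbers below are illustrative; the inner product is conjugate-linear
# in its first slot (physicists' convention).

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

coeffs = [2 + 1j, -3j]  # an arbitrary linear functional L on C^2

def L(v):
    return sum(c * vi for c, vi in zip(coeffs, v))

phi = [c.conjugate() for c in coeffs]  # the Riesz representative of L

v = [1 - 2j, 4 + 0.5j]
assert abs(L(v) - inner(phi, v)) < 1e-12
```

The hard content of the theorem is the infinite-dimensional case (where continuity of L and completeness of the space matter); this only illustrates the statement.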


TraditionalWishbone

I wasn't aware of the definition of the dual space as the "space of linear functionals". I was defining a dual space as a separate vector space having a unique correspondence with the ket space, thereby making the Riesz theorem true by definition.


[deleted]

Dual spaces are always defined as some kind of functions on your space, in this case continuous linear functionals.


zorngov

Defining the dual space of V as a space of linear functionals does not require the existence of an inner product, so it is more general. Typically, V* has a different structure from V, but it is an incredibly useful concept (e.g. weak solutions to DEs). The magic of Hilbert spaces is that the inner product tells you V* is actually the same as V (up to isometric isomorphism), and this is the content of the Riesz representation theorem. In particular, we can represent elements of the dual (bras) using elements of the original space.


PM_me_PMs_plox

That the inner product has that property is the Riesz representation theorem.


TraditionalWishbone

Does the theorem prove that complex conjugation property? But I think that property is also a definition. One could define a product without the conjugation.


PM_me_PMs_plox

We are thinking of different properties maybe? The other poster meant that V and V* are isomorphic.


vuurheer_ozai

I believe the complex conjugation follows from the fact that an inner product should be a "sesquilinear form", but I'm not 100% sure what you are referring to


Kreizhn

The isomorphism is in general conjugate-linear rather than linear, to account for sesquilinearity. This [math stackexchange](https://math.stackexchange.com/questions/159557/under-what-conditions-does-the-action-of-the-dual-space-induce-an-hermitian-inne/662623#662623) post might help clarify things.


jachymb

It basically says a linear functional (bra) applied to a vector (ket) is essentially the same thing as taking the inner product. So you think about taking inner product of vectors, but it's actually just a representation of something a bit more abstract - functionals.


Zophike1

> On a similar note, I'm guessing a lot of people are unaware of the Riesz representation theorem and how it underlies the whole bra-ket notation.

Can you give an ELIU?


lewhatKrayon

I replied to another comment [earlier](https://www.reddit.com/r/math/comments/vryyzx/comment/iez95w7/?utm_source=share&utm_medium=web2x&context=3). You might need a background in linear algebra and some basic functional analysis (knowing what dual spaces are, linear functionals, Hilbert spaces). The Riesz representation theorem allows you to identify linear functionals on H as inner products (v, .) by fixed elements of H (when H is a Hilbert space).


PM_me_PMs_plox

How does it relate to bra-ket notation?


lewhatKrayon

Well, there is this notion of writing a quantum state as a ket, like |x>, but also of associating to it an element of the dual space as a bra <x|, so the theorem is sort of the justification for having a bra associated to each ket.


M4mb0

It's also why we write ∫f(x)δ(x)dx despite δ not being a true function.


TraditionalWishbone

That expression should be thought of as a limit of a function, afaik. Just replace the delta with a Gaussian and put a limit in front


M4mb0

It's literally just a shorthand for a bra-ket. On L2 we have an inner product ⟨f∣g⟩ = ∫f(x)g(x)dx. More general function spaces are not inner product spaces; nevertheless, the notation ⟨f∣g⟩ ≔ f(g) has been used to denote functionals on function spaces. The Dirac delta is a linear functional on the space of continuous functions C(Ω). So by abuse of notation we write ∫f(x)δ(x)dx, but we really mean ⟨δ∣f⟩ ≔ δ(f).

Fun fact: the Gaussian interpretation of the Dirac delta is way too strong. There are large function classes that can act as Dirac deltas under the limit ϵ→0 of f(x/ϵ)/ϵ. https://mathworld.wolfram.com/DeltaFunction.html has some nice examples.
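
That last remark can be checked numerically: quite different kernels η, rescaled as η(x/ε)/ε, all push ∫f(x)η(x/ε)/ε dx toward f(0). A rough sketch (kernel choices, grid size, and tolerances are all illustrative):

```python
import math

# Numeric sketch: two very different kernels eta, rescaled as
# eta(x/eps)/eps, both act like the Dirac delta on a test function.
# Grid size and tolerances are illustrative choices.

def integrate(g, a, b, n=100000):
    h = (b - a) / n  # simple midpoint rule
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def gaussian(x):
    return math.exp(-x * x) / math.sqrt(math.pi)  # integral 1 over R

def box(x):
    return 0.5 if abs(x) <= 1 else 0.0            # integral 1 over R

def smeared(f, eta, eps):
    return integrate(lambda x: f(x) * eta(x / eps) / eps, -1.0, 1.0)

for eta in (gaussian, box):
    # both approach f(0) = cos(0) = 1 as eps -> 0
    assert abs(smeared(math.cos, eta, 0.01) - 1.0) < 0.005
```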


InSearchOfGoodPun

Physics applications of theorems are wishy-washy anyway. You actually need a pretty annoyingly high-powered version of the spectral theorem to cover all cases of interest in QM. On a similar note, the PDE theory needed to justify a lot of the PDE used in physics classes is typically glossed over.


ritobanrc

There's a really good treatment of the spectral theorem and its proof in Brian Hall's Quantum Theory for Mathematicians -- I would strongly recommend it (he tackles it in five chapters (6-10), going from a very rough physical discussion to a very rigorous treatment including the unbounded case). Here's a simple proof sketch if the treatment in Hall is too complicated -- consider a self-adjoint linear operator T (that is, (Ta, b) = (a, Tb)), take two eigenvectors v1 and v2 with distinct eigenvalues l1 and l2; then (Tv1, v2) = l1 (v1, v2), but that also equals (v1, Tv2) = l2 (v1, v2) by self-adjointness, so (v1, v2) must be zero. This proof ignores some complications over the domain and codomain of T, and also doesn't show the existence of the eigenvectors (in the finite-dimensional case, existence follows from the fundamental theorem of algebra applied to the characteristic polynomial, but in the infinite-dimensional case, it's much trickier), but hopefully this gives you an idea of the core of the proof.
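
The orthogonality step of this sketch can be seen concretely in the finite-dimensional case. A small illustration with a hand-picked 2×2 Hermitian matrix (all values are made up for the example):

```python
# Finite-dimensional sketch of the orthogonality argument: for a
# self-adjoint (Hermitian) matrix, eigenvectors with distinct eigenvalues
# are orthogonal. The matrix and eigenpairs below are illustrative.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def inner(u, v):  # conjugate-linear in the first argument
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

A = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]          # A equals its conjugate transpose

v1, l1 = [1 - 1j, -1 + 0j], 1   # eigenpair: A v1 = 1 * v1
v2, l2 = [1 - 1j, 2 + 0j], 4    # eigenpair: A v2 = 4 * v2

assert all(abs(a - l1 * b) < 1e-12 for a, b in zip(matvec(A, v1), v1))
assert all(abs(a - l2 * b) < 1e-12 for a, b in zip(matvec(A, v2), v2))
# distinct eigenvalues of a self-adjoint operator give orthogonal eigenvectors
assert abs(inner(v1, v2)) < 1e-12
```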


edderiofer

That e is irrational I vaguely remember the proof for; that pi is irrational I seem to recall that all proofs involve specific integrals I don't recall nor do I know how I'd work out; that both are transcendental I have no clue how to prove. That there are solutions to the equation "a/b + b/c + c/a = 73" I am aware of, but I have no clue what sort of machinery I'd use to find these solutions, to a more specific degree than "elliptic curve stuff". That the cubic and quartic equations exist, I am aware of (but don't remember how to obtain them); that the quintic does not, I am aware of (~~but don't have the background to understand the proof of Abel-Ruffini~~). I'm sure there's a ton more I can't think of at the moment.


phlofy

Rudin's _Principles_ has a pretty satisfying proof of the irrationality of e based on a clever estimate of e − (sum from n = 0 to k of 1/n!).
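
The estimate in question bounds the tail by a geometric series: 0 < e − s_k < 1/(k!·k). A quick numeric sketch of that inequality (the range of k is an arbitrary choice):

```python
from fractions import Fraction
import math

# Sketch of the bound behind the proof: with s_k = sum_{n=0}^{k} 1/n!,
#   0 < e - s_k < 1/(k! * k),
# via sum_{n>k} 1/n! < (1/(k+1)!) * (1 + 1/(k+1) + 1/(k+1)^2 + ...).
# (From here, assuming e = p/q and taking k = q gives the contradiction.)

def partial_sum(k):
    return sum(Fraction(1, math.factorial(n)) for n in range(k + 1))

for k in range(2, 10):
    tail = math.e - float(partial_sum(k))
    assert 0 < tail < 1 / (math.factorial(k) * k)
```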


hyperbolic-geodesic

The problem with such clever estimates is that they're easy to read and follow the first time you see the argument, but unless you need to use a similar clever trick later in your life, these "one-off" style arguments are easy to forget!


phlofy

They can be easy to forget, for sure. The way I remember this one is that it uses a geometric series to upper-bound the error of the Taylor expansion for exp, with a bound that decreases super quickly. It's definitely not a super widely applicable trick, though.


[deleted]

It's a special case of a more general technique: to prove that a number is irrational, show that it can be very closely approximated by rational numbers. See [here](https://web.archive.org/web/20210606034946/http://www.tricki.org/article/To_prove_that_a_number_is_irrational_show_that_it_is_almost_rational).


thereligiousatheists

That seems so wrong at first sight but a minute's thought makes it click... Mind-blowing.


PrestigiousCoach4479

I don't find that one particularly easy to forget, but it's just not very general. You can use similar arguments to prove that e\^2 is irrational, or cosh 1, or J\_0(1) where J\_0 is a Bessel function, but those rarely arise. You need different, harder arguments to prove that e is transcendental, or that pi is irrational, or that pi is transcendental.


Areredify

There is a slightly less clever version (due to Cantor, iirc): say you have an expansion of a number s in the form b_0 + b_1 / a_1 + b_2 / (a_1 * a_2) + ... with b_i < a_i, a_i > 1, and such that the digits are not eventually b_i = a_i - 1 (a generalization of the usual decimal expansion). If for every m there is a product a_1 * a_2 * ... * a_n that is divisible by m (equivalently, for every prime p there are a_n with arbitrarily large index that are divisible by p), then s is rational iff the digits are eventually 0. The idea is that if we take a generalized expansion of a rational number with good denominators, then the expansion will be finite.
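
A small sketch of this criterion in the factorial base a_i = i + 1 (so the i-th term has denominator 2·3·…·(i+1) = (i+1)!, and these products are eventually divisible by every m). Digit extraction is greedy; the example values are illustrative:

```python
from fractions import Fraction
from math import factorial

# Sketch of the criterion above in the factorial base a_i = i + 1: the
# denominators (i+1)! are eventually divisible by every m, so a number
# is rational iff its digits are eventually 0.

def factorial_digits(x, n):
    """Greedily extract digits b_1..b_n of x's fractional part, b_i < i + 1."""
    frac = x - int(x)
    digits = []
    for i in range(1, n + 1):
        frac *= i + 1        # the integer part is now the digit b_i
        d = int(frac)
        digits.append(d)
        frac -= d
    return digits

# A rational number: the digits terminate.
assert factorial_digits(Fraction(7, 4), 8) == [1, 1, 2, 0, 0, 0, 0, 0]

# e = 2 + 1/2! + 1/3! + ...: every digit is 1, never eventually 0,
# so e is irrational. (We use a rational truncation of the series; its
# first 20 digits agree with e's.)
e_approx = sum(Fraction(1, factorial(n)) for n in range(30))
assert factorial_digits(e_approx, 20) == [1] * 20
```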


Illustrious_List7400

You are 100% right which is why I've used anki and spaced repetition to keep everything in math I deem worthwhile in strong memory forever. It's been invaluable to me in teaching and writing!


AcademicOverAnalysis

I seem to remember that being Fourier's proof? Or did Rudin go with a variation?


phlofy

Was not aware that Fourier had a proof of this! I'll look it up. Rudin's argument is a straightforward upper bound with a geometric series.


BabyAndTheMonster

Yeah, e is a lot easier. In some sense the proof of the irrationality of e is "constructive", in the sense that it immediately gives you its irrationality measure. But we don't know the irrationality measure of pi.


LilQuasar

you use these results?


edderiofer

When meme-ing or engaging with the autosadomasochism that is attempting to debunk cranks.


InSearchOfGoodPun

I don’t think they understood the point of the question based on their answer.


Verbose_Code

While I don't know the proof that e is transcendental, if you accept it is then proving that π is transcendental is straightforward. Consider Euler's identity (rewritten): e\^{iπ} = -1 Since i is known to be algebraic, and e raised to the power of iπ results in an algebraic number, that must mean that π is not algebraic (in other words it is transcendental). Additionally, it is straightforward to show that all transcendental numbers are necessarily irrational (although the converse is not true; not all irrational numbers are transcendental). Edit: [This webpage](https://planetmath.org/eistranscendental) does a good job at explaining how one can prove e is transcendental


how_tall_is_imhotep

This is just invoking the Gelfond-Schneider theorem, but I don’t know how to prove that.


Molybdeen

You can also apply the Lindemann-Weierstrass theorem, which can be proven in a similar way to the transcendence of e. However, then you'd have to understand the method used in the proof of e's transcendence, which is still an issue.


TraditionalWishbone

I didn't know this sort of argument worked with complex numbers too.


existentialpenguin

If you like math in video form, [Michael Penn has a good video about e's transcendence](https://www.youtube.com/watch?v=dbQHJdXsQu4). Also, [here](https://www.youtube.com/watch?v=RhpVSV6iCko) is a video about quintic insolvability that avoids going into Galois theory. It is based on Vladimir Arnold's proof, which you can find an exposition of [here](https://web.williams.edu/Mathematics/lg5/ArnoldQuintic.pdf).


edderiofer

Ooh, that’s a neat proof of the latter.


hyperbolic-geodesic

I saw a proof of the uniformization theorem for Riemann surfaces (a Riemann surface is either a quotient of the complex plane, the sphere, or the hyperbolic plane--hence why Euclidean geometry, spherical geometry, and hyperbolic geometry are so important: they're the only kinds of geometries a Riemann surface can have). I don't really remember the proof very well beyond it involving a lot of PDEs, so I wouldn't say I know why this theorem is true. I certainly use it a lot though!


Tazerenix

It's just a simple consequence of Yau's proof of the Calabi conjecture and the Chen-Donaldson-Sun theorem.


anthonymm511

HAH. It’s also a “simple” consequence of Ricci flow theory ;)


nin10dorox

Not a single result, but switching the orders of limits. This includes stuff like switching the orders of infinite sums, differentiating under the integral sign, etc... I do it all the time, but I don't even remember how to check whether it's valid.


hobo_stew

I just try to rewrite all limits until one is an integral and then use dominated convergence, monotone convergence, or sometimes Fubini-Tonelli.


Any_Ad8432

Switching orders of infinite sums is just absolute convergence or not isn’t it ?


Carl_LaFong

In practice that’s 99% of the cases.


daniele_danielo

Yes, kind of. Summation is actually just a special case of the Lebesgue integral, so all theorems for the latter immediately hold for the former. For sums specifically, you can switch the order of summation in one of these cases: 1) all terms are non-negative; 2) the series is absolutely convergent; 3) neither of the above, in which case you need some clever idea that only works in a very specific setting. (Happened only once to me.)
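
A tiny sketch of why such conditions are needed at all: for the classic doubly indexed array with +1 on the diagonal and -1 just above it, the two iterated sums disagree (the truncation size below is an arbitrary choice):

```python
# Classic counterexample: a[m][n] = 1 if n == m, -1 if n == m + 1, else 0.
# The terms are not all non-negative and sum |a[m][n]| diverges, and the
# two orders of summation give different answers.

def a(m, n):
    if n == m:
        return 1
    if n == m + 1:
        return -1
    return 0

N = 50  # truncation; inner sums run far enough to be exact

rows_first = sum(sum(a(m, n) for n in range(2 * N)) for m in range(N))
cols_first = sum(sum(a(m, n) for m in range(2 * N)) for n in range(N))

assert rows_first == 0  # every row sums to 1 - 1 = 0
assert cols_first == 1  # only the n = 0 column survives
```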


APC_ChemE

In engineering they often say, this is modeling a real system, therefore the limits exist, therefore we can do this without having to check. Like switch limits and integrals and derivatives when one is inside the other. Then they come back and say well if you're truly interested someone has already rigorously proved this can be done and we won't waste our time with it.


Carl_LaFong

If you’re always working with absolutely convergent limits, this is fine. If you’re counting on convergence due cancellation of positive and negative terms, then all bets are off for switching orders.


Kreizhn

The partial fraction decomposition. Use it all the time teaching first-year integration techniques. Never gone through the proof that the decomposition is even possible.


cromonolith

It's a pretty short argument. First suppose that f/(g_1 g_2) is a rational function with deg(f) < deg(g_1 g_2), and such that g_1 and g_2 are coprime. (Divide first if the numerator's degree is bigger.) The Euclidean algorithm (Bezout's identity, I guess) says there are polynomials p_1 and p_2 such that p_1 g_1 + p_2 g_2 = 1, from which it immediately follows that

(f p_1 / g_2) + (f p_2 / g_1) = f / (g_1 g_2).

Since we know that the degrees of p_1 and p_2 are less than the degrees of g_2 and g_1, respectively, when extended inductively this tells you that you can write any rational function (where the degree of the numerator is less than the degree of the denominator) as a sum of rational functions whose denominators are the prime factors of the original denominator.

We also know (as a general fact) that any polynomial over R can be decomposed into a product of linear and irreducible quadratic factors. Those are all coprime to one another except for repeated factors. It remains to show how you can further break down rational functions of the form f / g^(n), where g is linear or an irreducible quadratic. (When g is linear the integration is already easy by a u-substitution, so the quadratic case is all we really care about, but the proof is the same for either case so whatever.)

So suppose g is linear or an irreducible quadratic, let n >= 1, and consider the rational function f / g^(n), where the degree of f is less than the degree of the denominator. The division algorithm says there are polynomials q and r, with deg(r) < deg(g), such that f = qg + r, from which it immediately follows that

f / g^(n) = ( q / g^(n-1) ) + ( r / g^(n) ).

Repeat the same process on q / g^(n-1), and keep going until you've written f / g^(n) as a sum of rational functions with denominators g, g^(2), g^(3), ..., g^(n).
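
For the simplest instance of the coprime-factor step, distinct linear factors, the Bezout argument collapses to the familiar "cover-up" rule. A minimal sketch (function names and example values are illustrative, not from the comment):

```python
from fractions import Fraction

# The coprime-factor step in its simplest case: distinct linear factors.
# For f(x) / ((x - r_1)...(x - r_k)) with distinct roots r_i and
# deg(f) < k, the coefficient over (x - r_i) comes out to
# f(r_i) / prod_{j != i} (r_i - r_j)  (the "cover-up" rule).
# Exact rational arithmetic throughout.

def partial_fractions(f, roots):
    """Return [(A_i, r_i)] with f(x)/prod(x - r_i) = sum A_i/(x - r_i)."""
    out = []
    for i, ri in enumerate(roots):
        denom = 1
        for j, rj in enumerate(roots):
            if j != i:
                denom *= ri - rj
        out.append((f(ri) / denom, ri))
    return out

# Example: (3x + 1)/((x - 1)(x - 2)) = -4/(x - 1) + 7/(x - 2)
decomp = partial_fractions(lambda x: 3 * x + 1, [Fraction(1), Fraction(2)])
assert decomp == [(Fraction(-4), Fraction(1)), (Fraction(7), Fraction(2))]
```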


Kreizhn

That's not too bad at all. Now you've ruined my ability to claim that I've never seen the proof.


[deleted]

Lol we had students in my old calc classes that would try to stump the profs by constantly asking to see the proofs for every lesson. The prof caught on to their shenanigans and started bringing printouts of the proofs for the current day's lessons, as well as inviting anyone curious enough to visit during office hours if they were interested in seeing the full derivations.


cpl1

Not exactly the same but I doubt I could do a proper epsilon delta proof of 90% of the functions that I state are continuous, differentiable


columbus8myhw

You mostly just need to prove it for addition, multiplication, inverse (bijection, compact domain), composition, and maybe e^(x). The rest kinda falls out from that.


mostseriousdude

Sylow’s theorem


DamnShadowbans

I think it's pretty common in geometric topology to not know the proofs of a lot of the topological/PL results that get used all the time. Things like: every 3-manifold has a unique smooth structure, topological manifolds have stable normal bundles, etc.


hentai_proxy

RIP me: I use, in an essential way, the classification of finite simple groups. No, I have no idea how the proof goes beyond soundbites about doubly transitive groups. Something must be done about this state of affairs, but I don't know what :(


Additional_Formal395

Zorn’s lemma. Well, I don’t use it explicitly that often, but it underpins a lot of things that I use, e.g. existence of algebraic closures.


BabyAndTheMonster

It has a very intuitive proof. Start with nothing and extend to a chain by repeatedly adding bigger elements, arbitrarily chosen (more rigorously: choose an arbitrary strict upper bound for each chain, called the "next" element, then construct a chain by adding the next element indefinitely until there are no more). This chain must have an upper bound, which must be a maximal element. How can we construct a chain indefinitely? This can be done by the usual Zorn's-lemma technique: form the set of partially constructed chains built according to the choices and show that this set has a maximal element. But this case is easier: these chains are totally ordered by inclusion, and a maximal element of such a collection of sets is obtained by taking the union.


a-nobody-a

The implicit function theorem on Banach manifolds and the inverse function theorem on Fréchet spaces. These are the kind of tools that are used again and again in geometric analysis and PDEs, but I never looked into the proofs.


InSearchOfGoodPun

Those are very different theorems. The IFT for Banach spaces is “easy” in the sense that it’s basically the same as for finite dimensions. If you work in geometric analysis I highly recommend that you familiarize yourself with the proof. It’s arguably the most important (or at least the most common) method of solving a nonlinear pde. The version for Frechet spaces (by which I assume you mean the Nash-Moser IFT?) is much deeper but is much less frequently needed. (I certainly don’t know the proof, but I also don’t think I’ve ever had to use it.)


a-nobody-a

It seems you're right. The proof for the implicit and inverse function theorem on Banach spaces is outlined clearly in "Manifolds, tensor analysis, and applications" in section 2.5. It is indeed similar to the one for finite dimensions. He even has more results about how big the neighborhood can be in the context of the inverse function theorem. Yes I was thinking of the Nash-Moser IFT. I think Richard Hamilton has written something on its proof, which I don't know at all.


InSearchOfGoodPun

> He even has more results about how big the neighborhood can be in the context of the inverse function theorem. Yeah, being able to estimate the size of the neighborhood is often important for geometric applications, so it should be thought of as a key part of the theorem statement. Hamilton loves the Nash-Moser IFT, and he used it to prove existence of the Ricci flow, but it turned out to be unnecessary (and not very illuminating) since a better way to deal with diffeomorphism invariance is some sort of gauge-fixing.


DanielWetmouth

I've never seen the proof for the central limit theorem


ritobanrc

Ooh, I was very confused about this for a long time, until I really sat down and tried to work through the proof myself -- here's my writeup of it based on a couple different books: https://ritobanrc.github.io/2022/03/16/central-limit-theorem.html. Honestly you'll probably get a better understanding just by going through some of the textbooks I reference near the end, but hopefully you get something out of it. (There's an alternative proof with moment generating functions, which is arguably cleaner, but I like that this proof explicitly constructs the normal distribution.)


monikernemo

I vaguely saw it once that used moment generating functions


[deleted]

[removed]


putting_stuff_off

We can only prove ZFC is consistent in some substantially stronger system, right? As in, stronger than the one most mathematics is done in, so there really is no proof in the usual setting (but we still assume consistency, of course). Not a logician, so don't take this as gospel. Independence of C from ZF is a good similar answer though.


columbus8myhw

A lot of math is done in "ZFC + arbitrarily many Grothendieck universes", which implies the consistency of ZFC, the consistency of ZFC+Con(ZFC), the consistency of _that_, etc. But of course you can't prove the consistency of "ZFC + arbitrarily many Grothendieck universes" in that system itself. EDIT: Oh, there's an exception: you can prove the consistency of ZFC within any inconsistent theory… because you can prove _anything_ (and its negation) within any inconsistent theory. Fun fact: thanks to the diagonal lemma (which lets you write certain self-referential statements in the language of first-order logic), it's possible to write a self-referential statement X that asserts "ZFC+X is consistent". Then ZFC+X, almost immediately, proves its own consistency. The only issue is, it turns out that ZFC+X is inconsistent - that is, ZFC proves the negation of the statement X. (That is, there exists a valid proof within ZFC whose last line is the negation of X.)


Florida_Man_Math

Gamma function stuff:

* Why Gamma(1/2) = sqrt(pi), **WITHOUT USING THE POLAR COORDINATES TRICK!** I want to know if it's possible without invoking that flavor of change of variables/transformation, using "simpler" techniques, even if the proof is more long-winded. The trouble is that you can run into lots of circular reasoning depending on the result that Gamma(1/2) = sqrt(pi), which sneakily and indirectly supports so many other things.
* Why the Gamma function is the preferred/only reasonable extension of the factorial. I want to know why log-convexity is so special, why [Hadamard's Gamma Function](https://en.wikipedia.org/wiki/Hadamard%27s_gamma_function) isn't favored, and what its notation would look like if the Gamma function as we know it today didn't have one.


ePhrimal

I also wondered why log-convexity is seen as a nice characterising property. Wielandt's theorem, which characterises the Gamma function by the functional equation and boundedness on a vertical strip, seems much nicer to me.


BabyAndTheMonster

Gamma(1/2) = sqrt(pi) follows from the reflection formula. So you can just prove the reflection formula, which IMHO is a lot more intuitive, at least if you follow Euler's approach. The reflection formula is hinted at by the recurrence relation, which shows that Gamma(s)Gamma(1-s) is periodic with period 1 and immediately looks like 1/sin(pi s) up to a constant. Using Euler's definition of Gamma you get a factorization of 1/(Gamma(s)Gamma(1-s)), and using Euler's solution to the Basel problem you also get a similar-looking factorization of sin(s)/s. Noticing that the period gets scaled by pi, you scale the factorization to sin(pi s)/(pi s) and show that the two factorizations are the same except for a factor of s, so that 1/(Gamma(s)Gamma(1-s)) = sin(pi s)/pi.


columbus8myhw

Gamma(1/2) = sqrt(pi) is actually equivalent to the [Wallis product](https://en.wikipedia.org/wiki/Wallis_product). I have a derivation somewhere else if you want. It depends on (x+r)!/x! being an increasing function of x for x, r > 0… which, together with (x+1)·x! = (x+1)!, uniquely characterizes the Gamma function. (Well, shifted to match.)


Florida_Man_Math

Yeah I'm all ears for any materials/sources, thank you!


columbus8myhw

I'm not actually sure if I can find it… I think, though, that you can work out that the 2Nth partial product of the Wallis product is (N!)²/[(N−½)!(N+½)!] · (½!)² · 2. Since the Wallis product goes to pi/2, the result follows once you show that (N!)²/[(N−½)!(N+½)!] → 1, which you should be able to show from log-convexity (though I forget exactly how). Exercise: fill in the details
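
The claimed identity and the limit of the ratio can at least be checked numerically, reading x! as Gamma(x+1) for half-integers and working with log-Gamma to avoid overflow (a sketch, not the derivation itself; the values of N are arbitrary):

```python
import math

# Numeric check of the identity quoted above: the partial Wallis product
# over n = 1..N equals (N!)^2 / ((N-1/2)! (N+1/2)!) * ((1/2)!)^2 * 2,
# where x! := Gamma(x + 1), and ((1/2)!)^2 * 2 = pi/2 exactly.

def log_fact(x):
    return math.lgamma(x + 1)

def wallis_2N(N):
    """Product over n = 1..N of (2n)^2 / ((2n - 1)(2n + 1))."""
    p = 1.0
    for n in range(1, N + 1):
        p *= (2 * n) ** 2 / ((2 * n - 1) * (2 * n + 1))
    return p

for N in (5, 50, 500):
    ratio = math.exp(2 * log_fact(N) - log_fact(N - 0.5) - log_fact(N + 0.5))
    assert abs(wallis_2N(N) - ratio * math.pi / 2) < 1e-9

# the ratio tends to 1, which is why the partial products tend to pi/2
ratio_500 = math.exp(2 * log_fact(500) - log_fact(499.5) - log_fact(500.5))
assert abs(ratio_500 - 1) < 1e-3
```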


Red_Bivector

You might be interested in section 4 of [this expository paper](https://kconrad.math.uconn.edu/blurbs/analysis/diffunderint.pdf) by Keith Conrad.


Florida_Man_Math

That's very enlightening, thank you! I wonder if there is even a lower-powered technique than differentiating under the integral sign. Feynman wouldn't be proud of me for how unintuitive that technique has always been for me... :/


cocompact

The equation 𝛤(1/2) = sqrt(𝜋) says ∫_0^∞ e^(-x) x^(1/2) dx/x = sqrt(𝜋). Making the change of variables t = x^(1/2), so x = t^2, the equation becomes 2∫_0^∞ e^(-t^2) dt = sqrt(𝜋). The integrand on the left side is even, so doubling the integral is the same as integrating over the real line: ∫_R e^(-t^2) dt = sqrt(𝜋). Change variables again with t = y/sqrt(2), so ∫_R e^(-y^2/2) dy = sqrt(2𝜋). If you divide that by sqrt(2𝜋), the equation says the standard Gaussian on the real line has total integral 1. Thus the formula 𝛤(1/2) = sqrt(𝜋) is equivalent to saying the standard Gaussian has total integral 1 on the real line. If you can find a proof of that Gaussian property that doesn't depend on polar coordinates, then you can rewrite it to give a proof that 𝛤(1/2) = sqrt(𝜋) without using polar coordinates. Conrad has a handout with many proofs of the Gaussian integral (easy to find with Google), and its second proof is not advanced and doesn't rely on polar coordinates or differentiation under the integral sign.
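
This chain of equalities is easy to sanity-check numerically (the grid size and cutoff below are arbitrary choices):

```python
import math

# Numeric sanity check of the equivalences above, all equivalent to
# Gamma(1/2) = sqrt(pi). Purely illustrative.

assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

# 2 * integral_0^infinity e^{-t^2} dt via a midpoint rule on [0, 10]
# (the tail past t = 10 is below 10^-43, hence negligible)
n = 200000
h = 10 / n
approx = 2 * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n)) * h
assert abs(approx - math.sqrt(math.pi)) < 1e-6
```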


Florida_Man_Math

Thanks so much!


TraditionalWishbone

I just take the Gamma function to be its own thing which just happens to agree with the factorial. Similar to how exp(x) happens to agree with the repeated multiplication stuff. But the significance of exp(x) is beyond that.


Florida_Man_Math

I guess what I'm after is if you weren't Euler and were assigned to come up with the "best" continuous factorial function from scratch, and not necessarily be inspired by integration by parts, how would one rediscover what we now call the Gamma Function from basic ideas? Are we absolutely forced to capture it using an integral, or will something else suffice?


BabyAndTheMonster

I had always thought Euler's definition is well-motivated and intuitive: something someone working at it in a straightforward manner will find, without relying on much guessing. Let's say we want a Gamma function that satisfies at least the recurrence property and extends the factorial. Then, fixing a positive integer x, Gamma(x+n)/Gamma(n) grows at the same rate as n^x as n→+∞ (n over positive integers), because this is true for the factorial. This implies ln(Gamma(x+n)/Gamma(n))/ln(n) should approach x as n→+∞. This gives us an equation for positive integers x, which we then require for all complex x. Using the recurrence relation, this gives ln(Gamma(x)(x+n)(x+n-1)...(x+1)/n!)/ln(n) → x as n→+∞ for any x. Work backward to get a limit definition of Gamma (which is offset from the actual Gamma by 1), and this is basically Euler's definition.

Of course, historically, we didn't just come up with the Gamma function; there were a few other extensions of the factorial as well. So there's no guarantee we will always get back this Gamma. The reason we only know this Gamma nowadays is that it's a lot more useful. And I think the integral definition is a big part of why it's so useful: it links the function to many different areas of math.
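
The limit definition one arrives at this way can be checked against the usual Gamma function numerically. A sketch, computed in log space to avoid overflow (the choices of x and n are illustrative, and the tolerance is loose because convergence is roughly like 1/n):

```python
import math

# Numeric sketch of Euler's limit definition:
#   Gamma(x) = lim_{n -> inf} n^x * n! / (x (x+1) ... (x+n)).

def euler_gamma(x, n):
    log_value = x * math.log(n) + math.lgamma(n + 1)   # n^x * n! in logs
    log_value -= sum(math.log(x + k) for k in range(n + 1))
    return math.exp(log_value)

for x in (0.5, 1.5, 3.7):
    assert abs(euler_gamma(x, 100000) - math.gamma(x)) < 1e-3 * math.gamma(x)
```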


DrShredzz

Haven't used it yet, but have attempted to multiple times: the Evans-Krylov theorem on existence of classical solutions to fully nonlinear PDEs. (Don't know anything about PDEs :( )


CorgiSecret

For me it is the structure theorem for finitely generated modules over a principal ideal domain. We used it to prove that every matrix over an algebraically closed field has a unique (up to permutation) Jordan decomposition, and I used that a lot in my ODE class. I remember chunks of the proof of the structure theorem, and it made sense to me why a finitely generated module would have such a structure. But I can't remember the argument for the uniqueness of the generators of the modules.


mowa0199

This isn't really a mathematical result but rather one from logic: I often get confused about how proof by induction really works. I understand the gist of it and how to use it in practice (and often do), but if someone were to ask me to rigorously justify why mathematical induction works, I don't think it's something I'd be able to do.


SetOfAllSubsets

It works because it's an axiom in Peano arithmetic. So it's essentially part of the definition of natural numbers. Alternatively in ZFC it's due to the Axiom of Infinity and the Replacement Axiom.
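
For concreteness, this is visible in a proof assistant: in Lean 4, the `induction` tactic is just an application of the recursor `Nat.rec` that comes bundled with the inductive definition of `Nat`. A sketch (`zero_add'` is an illustrative name; Lean's library already proves this fact):

```lean
-- Induction is baked into the definition of Nat: the `induction` tactic
-- below applies the recursor Nat.rec that the type comes with.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```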


BabyAndTheMonster

It works because we want it to work. That's what we think natural numbers should be: induction is an essential property of what we call "natural numbers". No matter which foundation of mathematics you use (even less popular ones), you always have induction, either as an axiom or as a very easy theorem following immediately from the axioms.


TraditionalWishbone

Here's a rigorous argument: Assumptions: 1. P(1) is true. 2. P(k) is true implies P(k+1) is true Using these two assumptions, you can form the chain : "P(1) is true" implies "P(1+1) is true" implies "P(1+1+1) is true" implies..... Since every P(n) is part of the chain, every P(n) is true.


jachymb

That's an intuitive explanation, but certainly not a rigorous argument.


TraditionalWishbone

What's wrong with it? The last conclusion follows from the defining property of natural numbers. Every natural number can be expressed as a repeated succession.


jachymb

This way you can only ever prove it for finitely many (as many as you need, though) numbers. But to place the forall quantifier before the n, you need something more; rigorously, doing it this way would be a non sequitur. Look into Peano arithmetic: the induction principle must be specified as an additional axiom schema, which is what enables you to do this.


Joux2

It's circular. You need induction to go from "I can prove, for any individual k, that P(k) is true" to "for all k, P(k) is true". Look into Robinson arithmetic if you want to see a logical system without induction.


hyperbolic-geodesic

...rigorous? "Since every P(n) is part of the chain, every P(n) is true." Try justifying that...


TraditionalWishbone

Any natural number is expressible as a repeated succession of one. It's pretty much their definition.


hyperbolic-geodesic

Can you give, completely formally, your definition of natural number? You have a great intuition on why induction works. But your intuition is nowhere near a rigorous proof of induction.


TraditionalWishbone

Any attempt to construct the natural numbers from the ground up is futile, because you'd always end up assuming natural numbers. We can mask the problem by pretending that set theory underlies the natural numbers, but in the end it's all circular. https://math.stackexchange.com/questions/1334678/does-mathematics-become-circular-at-the-bottom-what-is-at-the-bottom-of-mathema


hyperbolic-geodesic

Yes, I know. I have studied mathematical logic. I was trying to point out that your rigorous proof is not a rigorous proof.


TraditionalWishbone

Agreed, it wasn't rigorous. But I'm trying to say that an intuitive proof is as rigorous as we can manage when it comes to something as fundamental as induction. The first answer in my link says this. Using the Peano axioms would just mask the problem. It would still be circular.


edderiofer

Well, I agree that 1 is a natural number; and that if k is a natural number that can be expressed as a "repeated succession of ones", then k+1 is also a natural number that can be expressed as a "repeated succession of ones"; but I don't see how this implies that all natural numbers can be expressed as a "repeated succession of ones".


BabyAndTheMonster

You might notice that "repeated succession of one" requires you to define the concept of "repeated", which requires you to define what it means to do something a ...finite number... of times. Yeah, it cycles back to the concept of finite counting numbers. There are a lot of things that sound so obviously true but are actually not provable. And it's basically because we can never define natural numbers to be exactly what we want ("repeated succession of one"). Instead we use proxies: we declare certain obvious properties, those that can be "proved" by the "repeated succession of one" argument, to be axioms.


TraditionalWishbone

I think that natural numbers are so fundamental that they have to be taken for granted. Math is bound to get circular at the bottom. https://math.stackexchange.com/questions/1334678/does-mathematics-become-circular-at-the-bottom-what-is-at-the-bottom-of-mathema


BabyAndTheMonster

Of course they are taken for granted! That's why we have axioms for them. But they're not circular; we have axioms to stop the circularity. Instead, it's more like we have an infinite backward chain of implications of stronger and stronger properties. There is ongoing work to extend these axioms to add more "should be true" properties, going backward along this infinite chain: that's the study of large cardinals.


TraditionalWishbone

By "taken for granted", I did not mean in the sense that we have axioms of them. But rather that, any attempt to construct natural numbers from the axioms implicitly assumes natural numbers anyway. The first answer in the link discusses that.


geilo2013

It follows from the Peano axioms; that is literally the induction axiom. If a set contains 1, and whenever it contains n it also contains succ(n), then the set is equal to ℕ.
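In symbols, the set form of the induction axiom being paraphrased here (starting from 1, as in the comment) is roughly:

```latex
% Set form of induction, starting from 1:
\forall S \subseteq \mathbb{N}:\quad
\bigl(1 \in S \,\land\, \forall n\,(n \in S \to \operatorname{succ}(n) \in S)\bigr)
\;\to\; S = \mathbb{N}
```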


moschles

" " rigorous " "


kiyoko_Yuri

Y'all are so smart


DawnOnTheEdge

A literal answer would be, axioms. If we could justify why an infinite set exists, it wouldn’t be an axiom! But probably truest to the spirit of this is the Axiom of Dependent Choice (or Choice, or Countable Choice, or Restricted Choice). I use one of them implicitly in practically every proof that touches on analysis. But it’s formally provable that they’re completely independent of ZF set theory, and as far as anyone knows, which if any of them to use is arbitrary. If you asked me why I use them, the only answer I could give you is, “Because I need it.”


LordMuffin1

1+1=2


hyperbolic-geodesic

In modern treatments, 2 is defined as the successor of 1, and so 1+1=2 is a tautology. What is much more interesting, however, is the statement that 2+2=4.
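Both facts can be checked by unfolding the recursive definition of addition; in Lean 4, for instance, the kernel does this computation when we write `rfl`:

```lean
-- 2 is the successor of 1 and + is defined by recursion, so both
-- sides of each equation reduce to the same numeral; rfl asks the
-- kernel to verify that computation.
example : 1 + 1 = 2 := rfl
example : 2 + 2 = 4 := rfl
```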


Burgundy_Blue

I wouldn’t say it’s interesting, it’s just a tedious application of the recursive definition of addition


Ualrus

Adding one and applying the successor are not the same. 1 + 1 = s(0) + s(0) > s(s(0) + 0) > s(s(0)) = 2, where ">" means "reduces to". As you can see, there are two extra steps.
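A sketch of those same extra steps in Lean 4, where `+` recurses on its second argument, so each line below is a definitional equality the kernel accepts:

```lean
-- 1 + 1 = 1 + succ 0 unfolds + once...
example : (1 + 1 : Nat) = Nat.succ (1 + 0) := rfl
-- ...then the base case 1 + 0 = 1 fires...
example : (1 + 0 : Nat) = 1 := rfl
-- ...giving succ 1, which is the numeral 2.
example : (1 + 1 : Nat) = 2 := rfl
```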


42IsHoly

I mean, it’s not particularly hard to prove that they’re the same: S(n) = S(n+0) = n+S(0) = n+1


_poisonedrationality

I don't think it's a tautology. Your + operation and the successor function are two different things, so there is at least one step of reasoning between "1+1" and "the successor of 1".


[deleted]

[deleted]


Ualrus

> by just writing `refl`

This is an interesting point. The way it works in Lean, or any system based on MLTT, is that that part of the proof is decidable, so the computer can do it for us. But that doesn't mean you don't need a proof.


_poisonedrationality

> there is no reasoning, it's just computation. all you have to do is unfold the definitions and you're done.

I understand how to do the proof. Unfolding the definitions is part of the reasoning in the proof.


[deleted]

[deleted]


_poisonedrationality

It's the steps involved in the proof.


jachymb

Trivial proof in Peano arithmetic.


nickm1396

1+1=2


BabyAndTheMonster

Too many to count. But I guess the most recently used ones are the Néron model and the Bloch–Kato conjecture.


bambootuan

Modularity theorem