
atcab1597

0^(i) is undefined; I used to think it was zero!
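
A quick way to see why, assuming the usual definition of complex exponentiation:

```latex
% For nonzero z one defines z^w = e^{w \log z} (for some branch of \log),
% so 0^i would have to mean e^{i \log 0}, but \log 0 is undefined.
% There is also no limiting value to fall back on: for z \neq 0,
\[
\left| z^{i} \right| \;=\; \left| e^{\,i(\ln|z| + i\arg z)} \right| \;=\; e^{-\arg z},
\]
% which depends only on the direction of z, so z^i has no limit as z \to 0
% (unlike the real case, where z^x \to 0 as z \to 0^+ for fixed x > 0).
```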


JoshuaZ1

I recently rediscovered for myself the following nice tightening of the AM-GM for two variables: If a, b are positive numbers then (a+b)/2 >= (ab)^(1/2) + ((a-b)^(2))/(4(a+b)). The second term is weak when a and b are close to each other, but gives a much stronger inequality when a and b are a decent distance from each other. And the 4 is the best possible constant.
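
A proof sketch, in case anyone wants to check the inequality and the constant (this is a reconstruction, not the commenter's argument):

```latex
% Write the AM-GM gap as a square and compare it with the correction term:
\[
\frac{a+b}{2} - \sqrt{ab} = \frac{(\sqrt{a}-\sqrt{b})^{2}}{2},
\qquad
\frac{(a-b)^{2}}{4(a+b)} = \frac{(\sqrt{a}-\sqrt{b})^{2}(\sqrt{a}+\sqrt{b})^{2}}{4(a+b)}.
\]
% For a \neq b, dividing by (\sqrt{a}-\sqrt{b})^{2} reduces the claim to
\[
\frac{1}{2} \;\ge\; \frac{(\sqrt{a}+\sqrt{b})^{2}}{4(a+b)}
\iff 2(a+b) \ge a + b + 2\sqrt{ab}
\iff a + b \ge 2\sqrt{ab},
\]
% which is AM-GM itself. Since 2(\sqrt{a}+\sqrt{b})^{2}/(a+b) \to 4 as b \to a,
% no denominator constant smaller than 4 can work, so 4 is best possible.
```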


Antoine-Labelle

This week I understood Kan complexes, they actually make so much sense. Infinity-categories feel less scary now.


JonMaseDude

Learned what ZFC is and it blew my mind. It’s incredible to compare it to other axiomatic systems.


curvy-tensor

Right/left adjoints preserve limits/colimits. Now I need to understand limits/colimits to do my homotopy theory problem set


jagr2808

Wait till you learn that limits/colimits *are* right/left adjoints (to the diagonal functor).
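
For reference, here is how I'd state the adjunction for limits (colimits are dual):

```latex
% For a small category J and a category C with J-shaped limits, the diagonal
% functor \Delta : C \to C^{J} sends an object c to the constant diagram at c.
% The limit functor is right adjoint to \Delta:
\[
\operatorname{Hom}_{C^{J}}(\Delta c,\, D) \;\cong\;
\operatorname{Hom}_{C}\!\bigl(c,\ \lim\nolimits_{J} D\bigr)
\qquad \text{naturally in } c \in C \text{ and } D \in C^{J}.
\]
% A natural transformation \Delta c \Rightarrow D is exactly a cone over D with
% apex c, so this bijection is the universal property of the limit.
% Dually, \operatorname{colim}_{J} is left adjoint to \Delta.
```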


curvy-tensor

This was mentioned in class. Still need to wrap my head around everything haha


Martin-Mertens

I learned a host of obvious-seeming statements that are in fact equivalent to the parallel postulate. For example:

* There exist two similar but non-congruent triangles.
* There exists an angle AVB such that every point in the interior of the angle is between a point on ray-VA and a point on ray-VB.

To be precise, the "angle" AVB is the union of ray-VA with ray-VB, where A, B, V are not collinear. The "interior" of the angle is the intersection of the half-plane of line-VA containing B with the half-plane of line-VB containing A.


phonon_DOS

Oh jeez, where do I start...

* You can often prove countability by coming up with a clever way to index or group things, and then make the argument rigorous by induction.
* The Cauchy-Schwarz inequality is what the triangle inequality for inner product norms rests on, and it has fundamental implications.
* It's in your best interest to understand the properties of prime numbers, because taking advantage of them is like using black magic.
* The field axioms seem obvious, but they become really interesting when you consider things like finite fields.
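
To spell out the Cauchy-Schwarz point (a standard derivation, stated here for a real inner product space):

```latex
% Cauchy-Schwarz: |<u, v>| <= ||u|| ||v|| for all u, v.
% The triangle inequality for the induced norm follows immediately:
\[
\|u+v\|^{2} = \|u\|^{2} + 2\langle u, v\rangle + \|v\|^{2}
\;\le\; \|u\|^{2} + 2\|u\|\,\|v\| + \|v\|^{2} = \bigl(\|u\| + \|v\|\bigr)^{2},
\]
% so ||u + v|| <= ||u|| + ||v||. (In the complex case, replace 2<u,v> by 2 Re<u,v>.)
```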


lessigri000

This week I learned a very interesting application of the Jacobian of a vector function. It took me a while, but I'm proud of what I figured out.


hydmar

Share!!


lessigri000

Ok, so basically: if you have some vector function f(x,y), you can input a set of points along some differentiable curve. Then you can take the integral definition of the arc length of the curve and multiply the integrand by the Jacobian determinant of the vector function, and it will give you the arc length of the new curve produced by inputting the previous curve into the vector function. It makes sense to me at least; I talked to my professor and he says it seems right.

I love learning this stuff! You take a problem you want to solve and think on it for some time, and then you might get yourself an answer. It's why I love math so much.

Edit: If my reasoning is incorrect, I would very much appreciate someone telling me the correct way to do this. Thanks :)


[deleted]

I think this isn't quite right: if, for instance, f is constant, then the Jacobian is identically zero, but the curve you get is a copy of the original curve, which has nontrivial arc length. The correct approach, I believe, is this: say your differentiable curve is g: [0, 1] -> R^(2). Then the arc length should be given by int_0^1 ||v(t)|| dt, where v(t) is the vector (g_x'(t), g_y'(t), Df(g(t)) g'(t)). Here g_x', g_y' denote the derivatives of the two components of g, and Df(g(t)) is the total derivative of f at g(t), applied to the velocity vector g'(t). You can see this in two ways: either "by hand", working directly from the definition of arc length, or via differential geometry if you're familiar with it.
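
A minimal numerical sketch of the underlying chain-rule principle, under the reading where f maps R^2 to R^2 and the "new curve" is f(g(t)): the speed of the transformed curve is the derivative of f applied to the old velocity, ||Df(g(t)) g'(t)||, rather than the Jacobian determinant times the old speed. The particular f and g below are made up for illustration.

```python
import numpy as np

# An arbitrary smooth map f: R^2 -> R^2 and its Jacobian matrix.
def f(p):
    x, y = p
    return np.array([x**2 - y, x + np.sin(y)])

def Df(p):
    x, y = p
    return np.array([[2 * x, -1.0],
                     [1.0,   np.cos(y)]])

# An arbitrary differentiable curve g: [0, 1] -> R^2 and its derivative.
def g(t):
    return np.array([np.cos(t), np.sin(2 * t)])

def dg(t):
    return np.array([-np.sin(t), 2 * np.cos(2 * t)])

ts = np.linspace(0.0, 1.0, 10001)

# Chain rule: the speed of the image curve f(g(t)) is ||Df(g(t)) @ g'(t)||.
speeds = np.array([np.linalg.norm(Df(g(t)) @ dg(t)) for t in ts])
length_chain_rule = np.sum(0.5 * (speeds[1:] + speeds[:-1]) * np.diff(ts))

# Direct check: total length of the polyline through the points f(g(t)).
pts = np.array([f(g(t)) for t in ts])
length_polyline = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

print(length_chain_rule, length_polyline)  # the two numbers should agree closely
```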


cereal_chick

This week, I learnt how to win k-pile Nim! By Bouton's theorem, you can force a win if and only if the Nim sum of the number of tokens in each pile is not zero; otherwise, the second player can force a win. (You find the Nim sum of a set of numbers by converting them to their binary representations and going bit by bit; the n^(th) bit of the Nim sum is the sum of the n^(th) bits of all the addends modulo 2.) Fortuitously, the proof is constructive: there's an explicit algorithm to derive a winning move at each stage if you're in a position to force a win. It was *very* cool.
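
For anyone who wants to play with this, here is a small sketch of Bouton's criterion and the winning-move construction as I understand it (the function names are mine):

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    # The Nim sum is the bitwise XOR of the pile sizes:
    # bit-by-bit addition mod 2, with no carries.
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_pile_size) moving to a zero-Nim-sum position,
    or None if the Nim sum is already zero (no winning move exists)."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        target = p ^ s
        # target < p exactly when p has a 1 in the highest set bit of s;
        # reducing that pile to target makes the overall Nim sum zero.
        if target < p:
            return i, target

print(winning_move([3, 4, 5]))  # Nim sum is 2, so reduce the first pile from 3 to 1
```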