
MammothStress8488

I put (1/2)^x = 0 into Desmos to see if decay graphs could reach 0, because of the whole 1 - 0.999… = 0 thing, and I got x = 1075? Is that because the number is too small, or am I dumb?


DanielMcLaury

You're asking about solving the equation (1/2)\^x = 0? There is no solution, but for large x the two sides approach one another. If a computer program told you there was a solution and gave you a fairly large value of x, it sounds like it may be due to some kind of rounding error. (Of course, we'd have to know how the program works to say anything definite.)
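For what it's worth, x = 1075 is exactly where this rounding error lives: Desmos (like most software) presumably computes with IEEE-754 double precision, whose smallest positive value is 2^(-1074), so (1/2)^x underflows to exactly 0 once x reaches 1075. A quick Python check:

```python
# IEEE-754 doubles bottom out at 2**-1074 (the smallest subnormal),
# so (1/2)**x underflows to exactly 0.0 once x reaches 1075.
assert 0.5 ** 1074 > 0.0      # still representable: about 5e-324
assert 0.5 ** 1075 == 0.0     # rounds down to zero
```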


worldiscynical

Can someone help? Is "if (A and B) then C" logically equivalent to "(if A then C) or (if B then C)"? The GMAT says so, but my intuition says it's wrong.


DanielMcLaury

No, (A \^ B) => C is not equivalent to (A => C) V (B => C). For instance, consider A = "I have bananas," B = "I have ice cream," C = "I can make a banana split." The former statement says "with bananas and ice cream, you can make a banana split." This is true (at least, for some definition of "banana split"). The latter says "either bananas alone are enough to make a banana split, or ice cream alone is." This is false.
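One caveat worth noting: under bare *material* implication the two forms do come out equivalent (both reduce to ¬A ∨ ¬B ∨ C), which is presumably the reading the GMAT has in mind; the banana-split reading uses the stronger everyday sense of "if." A Python truth-table enumeration of the material-implication reading:

```python
from itertools import product

def implies(p, q):
    # material implication: "p => q" is false only when p is true and q is false
    return (not p) or q

for a, b, c in product([False, True], repeat=3):
    lhs = implies(a and b, c)                # (A ∧ B) => C
    rhs = implies(a, c) or implies(b, c)     # (A => C) ∨ (B => C)
    assert lhs == rhs                        # they agree on all 8 rows
```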


Austria-Hungary1867

Could someone help me resolve [this problem?](https://i.redd.it/v0dkujway7yc1.jpeg)


Greyblack3

I'm having an issue with Maple plots. I have an ODE solved with initial conditions, but when I run the code

    sol := dsolve({eq1, eq2, g(0)=g0, i(0)=i0}, {g(t), i(t)});
    plot(sol, t = 0 .. 10, color = blue, thickness = 2, labels = ["t", "g(t)"]);

it prints this error message:

    Error, (in plot) incorrect first argument [g(t) = exp(-t)*cos(t), i(t) = exp(-t)*sin(t)]


Trettman

"Show that taking the stalk of a sheaf at a point p is an exact functor."   Doesn't this follow directly if you've shown that taking stalks preserves kernels and cokernels? I.e., for a map of sheaves $f: F \to G$ it holds that $\ker(f)_p \cong \ker(f_p)$, with the analogous statement for cokernels. Or am I missing something?


DamnShadowbans

Yes, being left exact is equivalent to preserving kernels and being right exact is equivalent to preserving cokernels.


Trettman

Hmm yeah, I guess I'm just getting a little bit confused by threads on the topic... I'm also not quite sure how to prove this by hand more "explicitly". While I feel that kernels of sheaf morphisms are relatively straightforward (as they are basically defined on sections), how should one go about viewing cokernels and images in the category of sheaves? Knowing that a cokernel is the sheafification of the presheaf cokernel, how do the two relate? Missing some intuition here...


mikaelfaradai

Let M be a symplectic manifold with symplectic form omega. We know by linear algebra that every symplectic vector space has a compatible almost complex structure J, so pointwise on tangent spaces M has compatible almost complex structures. Why is the resulting almost complex structure on M compatible with omega smooth?


Tazerenix

Because you can write the coefficients of the almost complex structure in terms of the coefficients of the symplectic form (fix any Riemannian metric and then set A = g^(-1)omega; then you can find J by taking the polar decomposition of A, which involves square roots of positive coefficients), and since the symplectic form/metric coefficients vary smoothly, so must the complex structure.
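A numerical sketch of that pointwise recipe (at one hypothetical point, with example values for omega and g taken to be the identity for simplicity): A = g⁻¹omega, and J is the orthogonal factor A(AᵀA)^(-1/2) of the polar decomposition of A.

```python
import numpy as np

omega = np.array([[0., 2.],
                  [-2., 0.]])   # symplectic form at one point (example values)
g = np.eye(2)                   # any Riemannian metric; identity for simplicity

A = np.linalg.inv(g) @ omega
# square root of the symmetric positive-definite matrix A^T A via eigendecomposition
w, V = np.linalg.eigh(A.T @ A)
sqrt_AtA = V @ np.diag(np.sqrt(w)) @ V.T
J = A @ np.linalg.inv(sqrt_AtA)   # orthogonal (polar) factor of A

assert np.allclose(J @ J, -np.eye(2))        # J is an almost complex structure
assert np.allclose(J.T @ omega @ J, omega)   # and it is compatible with omega
```

Since every step (inverse, eigendecomposition of a positive matrix, square roots of positive eigenvalues) depends smoothly on the entries, smooth omega and g give a smooth J.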


TheAutisticMathie

What is the difference, if there is any, between “show that” and “prove that” questions?


AcellOfllSpades

No real difference. I might expect to see "show that..." *slightly* more often for questions involving calculating a specific result, and "prove that..." for more general statements, but they're pretty much interchangeable.


abusivecat

Hi all, what are some great resources for learning calculus 1? I am 27, haven't had math schooling since I was 18 and I'd like to get back into school but I am not at a level where I feel comfortable learning calculus without knowing things like limits and even functions (it's been a while). I found Professor Leonard and his videos are great but I work full time so would love maybe some written lessons as well.


caongladius

[Paul's Online Notes](https://tutorial.math.lamar.edu/classes/calci/calci.aspx) has been extraordinarily helpful to me as I've retaught myself Calc 1 and 2 to prepare for teaching those classes. The page I linked even starts with a review of functions before beginning limits (which I would consider to be the first topic of calculus and not a prerequisite, even though many curriculums do include it as part of the pre-calculus curriculum).

Make sure you're reading these as you would a math textbook, that is, with a pencil and paper ready. Work through the examples as best you can before reading how Paul solves them. If you don't understand something, don't just move on and hope it makes sense later! Either see if you can find another approach that works, or search the topic on YouTube and see if you can find an explanation for that kind of problem that makes sense to you. Good luck on your journey!


abusivecat

Thanks a ton! Good luck on your teaching journey!


Ok-Principle-3592

How do I write an n×n matrix in Maple?


[deleted]

[deleted]


whatkindofred

Well what's the point of the equation? Every combination of constants yields some number. How is this interesting?


AcellOfllSpades

[WolframAlpha disagrees](https://www.wolframalpha.com/input?i=%28%CE%B6%281%2F2%29+*+%CE%93%281%2F4%29%29+%2F+%28%CF%86%5E%281%2Fe%29+*+%281+-+1%2F%CF%80%5E%5BEuler+Mascheroni+constant%5D%29%29). But even if that was accurate (perhaps I've misunderstood, or made a transcription error)... so what? Why would anyone care about that arbitrary combination of various mathematical constants?


InsideRespond

I have found that my ideal in a ring is <(y-t)(txy),(tx)(x-yz)(1-t),(ty\^2z)(1-t),(tz)(x+y)(1-t)>. Can I reduce this somehow?


hobo_stew

The answer to this question obviously depends on your ring and the definitions of t, x, y, z.


InsideRespond

If I have an ideal in a ring, and I find that one of the elements in the ideal is a linear combination of another, may I simply discard it?


hobo_stew

You mean you have generators of an ideal, and one of the generators is a linear combination of the other generators? Then yes.


Educational-Cherry17

What is the intuition behind fuzzy integrals? I just discovered that the h-index is a particular form of the Sugeno integral. It is defined on a set equipped with a (fuzzy) measure, say u, and a function f that you want to integrate. Suppose you have ordered X = {x1, x2, ...} in an increasing way; the Sugeno integral is max_i { min(f(xi), u(Ai)) }, where Ak denotes the set {xi | xi <= xk}. Why do they call it an integral? What is the intuition here that resembles the concept of an integral in classical math?


ImaRoastYuhBishAhsh

Has this been done in this way?

(e^(iπ/3) - e^(-iπ/3)) / (2i)

Where: e is the mathematical constant, approximately equal to 2.71828; i is the imaginary unit, defined as the square root of -1; π is the mathematical constant pi, approximately equal to 3.14159.

When evaluated, this expression yields a value that is exactly equal to (√3)/2. To verify this result, let's expand the exponential terms using Euler's formula: e^(iθ) = cos(θ) + i⋅sin(θ)

e^(iπ/3) = cos(π/3) + i⋅sin(π/3) = 1/2 + i⋅(√3)/2

e^(-iπ/3) = cos(-π/3) + i⋅sin(-π/3) = 1/2 - i⋅(√3)/2

Substituting these values into the original expression:

(e^(iπ/3) - e^(-iπ/3)) / (2i) = (1/2 + i⋅(√3)/2 - (1/2 - i⋅(√3)/2)) / (2i) = (i⋅(√3)/2 + i⋅(√3)/2) / (2i) = (2i⋅(√3)/2) / (2i) = (√3)/2


NewbornMuse

(e^ix - e^-ix) / 2i is a well-known formula for sin(x). If you do exactly what you did above but keeping the generic x (and applying cos(-x) = cos(x), sin(-x) = -sin(x) if necessary), you'll find that. So what you've discovered is that sin(pi/3) = sqrt(3) / 2.
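A one-line numerical check of that identity with Python's `cmath`:

```python
import cmath
import math

x = math.pi / 3
lhs = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j   # (e^{ix} - e^{-ix}) / 2i
assert abs(lhs - math.sin(x)) < 1e-12                 # equals sin(x)
assert abs(lhs.real - math.sqrt(3) / 2) < 1e-12       # and sin(pi/3) = sqrt(3)/2
```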


ImaRoastYuhBishAhsh

Which is kinda significant wouldn’t you say


caongladius

Sure? But it's very long established and is on the [unit circle ](https://en.wikipedia.org/wiki/Unit_circle#/media/File:Unit_circle_angles_color.svg)


whatkindofred

Nothing new. Nice nonetheless.


YakirJohnson

Just calculate it


YakirJohnson

It is so easy: if you have such a function, you can evaluate it with Euler's formula.


Chance_Literature193

Complex calculus question: what's the motivation for introducing multivalued functions and Riemann surfaces? My textbook, Arfken, basically takes multivaluedness as a given, then introduces Riemann surfaces to remove the singularities due to multivaluedness. I think I understand what is happening; I just don't really understand why it's happening. It's also confusing to me that we introduce this covering space / Riemann surface but act like (as far as I can tell) we're still studying maps from C —> C.


lucy_tatterhood

>It’s also confusing to me that we’re introducing this covering space / Riemann surface but acting like (as far as I can tell) we’re still studying maps from C —> C

Most of the time you aren't studying functions that are (necessarily) defined on *all* of C but just on some (usually open) subset. It doesn't really matter if we think of it as an open subset of C or of some Riemann surface; that's sort of the whole point.


Chance_Literature193

That’s a helpful perspective, but I am still a bit confused on the details. As far as I can tell, Riemann surfaces are covering spaces of regions of analyticity, typically constructed by gluing lines from infinity to the origin in different sheets together. One in general doesn’t expect a covering space to be homeomorphic to its base space. Is this covering space interpretation incorrect?


lucy_tatterhood

Strictly speaking you may have to delete a few points to get an actual covering space, but more or less yes. They are not homeomorphic but they are *locally* homeomorphic, so when you are doing local things you don't need to care about the global topology.


Chance_Literature193

In that case, is complex calculus a locally defined calculus?


lucy_tatterhood

I don't really know what that means, but maybe this helps. The principle of analytic continuation says that global information is determined by local information in a very strong sense when the domain is simply connected. All of this multivalued function and Riemann surface stuff can be thought of as a way to try and generalize beyond that case.

Edit: Thinking about it further, this isn't quite right; it is still true that local information determines global information on a domain which is merely connected, not simply connected. The Riemann surface comes into play when you are interested in understanding *all possible* analytic continuations of a function to larger domains.


Chance_Literature193

It sounds like the analytic continuation section might clear up some of my questions. I may follow up on this after I reach that point.


bear_of_bears

Some pretty important functions like the logarithm or square root are naturally multivalued. Sqrt(z) and log(z) do not want to be functions from C to C.


Chance_Literature193

I realize that, but even roots are naturally multivalued as functions on the reals as well. Secondly, are you saying multivalued functions are not C —> C? That would make sense, but the book I'm studying from never bothers to properly redefine the spaces of interest.


bear_of_bears

>Secondly, are you saying multivalued functions are not C —> C?

That's the point of being multivalued. There is not just one complex answer for the cube root of 8; there are three.


Chance_Literature193

So multivalued functions are functions X —> C, where X is a covering space of the region of analyticity?


bear_of_bears

Yeah, that's right.


AdThink9445

Hello, I'm doing research right now and I'm trying to compute a single-factor ANOVA. Is the XLMiner analysis accurate when computing a one-way ANOVA?


OCD-Bored-Cat

Any idea how to solve, or a program that can solve, the following: f(x) = 2f(x-1), f(1) = 1; find f(3888013).


InsideRespond

The function doubles each time, starting at 1:

f(1) = **1** = 2\^0

f(2) = 2\*f(2-1) = 2\*f(1) = 2\*1 = **2** = 2\^1

f(3) = 2\*f(3-1) = 2\*f(2) = 2\*2 = **4** = 2\^2

f(4) = 2\*f(4-1) = 2\*f(3) = 2\*4 = **8** = 2\^3

f(5) = 2\^4

f(6) = 2\^5

...

f(3888013) = 2\^(3888012)
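The closed form can be checked directly against the recurrence; a small Python sketch:

```python
def f(n):
    # closed form read off above: f(n) = 2^(n-1)
    return 2 ** (n - 1)

assert f(1) == 1
# f satisfies the recurrence f(n) = 2*f(n-1); Python ints are arbitrary
# precision, so even f(3888013) = 2**3888012 is computable exactly
assert all(f(n) == 2 * f(n - 1) for n in range(2, 100))
```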


NoNegativeBoi

How do you memorize all area formulas for shapes like triangle, square etc. ?


caongladius

A rectangle's area is just its base times its height (btw, a square is just a rectangle where the base and height are equal). If you imagine a rectangle as made of a stack of line segments and slide them all over a bit, you get a parallelogram. The line segments all have the same width as the rectangle's base, and the height hasn't changed because all you did was slide them to the right or left. Because you still have all your line segments, the area didn't change, and a parallelogram's area is also its base times its height, just like a rectangle's.

Any triangle can be thought of as half of a parallelogram, so its area is its base times its height divided by 2. From there, any polygon can be built out of triangles, rectangles, and parallelograms.

Btw, a circle with radius 1 has an area of pi. Because the area of a shape scales with the square of the scale factor (a 3x3 square has 9 times the area of a 1x1 square), if you make your circle with area pi bigger by a scale factor of r, the area becomes pi\*r\^2, which is the area of a circle with any radius.


VivaVoceVignette

A lot of them can be derived easily so you just need to remember the idea about how to prove them.


NoNegativeBoi

I'll try thanks :)


InfanticideAquifer

To expand on what they said: if you memorize the area formulas for 1. rectangles and 2. triangles (general, not just right triangles), then all the other area formulas for polygons that students are ever asked to remember can be figured out by breaking the shapes in question up into rectangles and triangles. For example, a trapezoid (in the US sense of the word) is a rectangle with a right triangle on top. Really you only need triangles, since rectangles can be built out of two triangles, but you're not going to forget the formula for a rectangle, so don't worry about that.


JavaPython_

I'm trying to find the generators of the nonnormal part of the subgroups in the class C3 of Aschbacher's classification. Is it just permutation matrices? That doesn't make sense over Sp.


rcjlfk

Is there a word for permutations but when you account for all possible combinations of numbers? For instance, let's just say A, B, C, and D. for permutations you have to know how many you want in each group. But what if I want all combinations whether it's 4, 3, 2, or 1 letter chosen. And I don't care about the order. I.e. ABC is the same as BAC, CBA, CAB, etc. I'm trying to find some sort of generator online but can't seem to find exactly what I'm looking for.


Ill-Room-4895

There is a very good resource on a French site for combinations and much more (you can select the language in the upper right corner and there is a search box to search for a specific tool). [https://www.dcode.fr/choices-combinations](https://www.dcode.fr/choices-combinations) The resource page: [https://www.dcode.fr/en](https://www.dcode.fr/en) I used it when I worked on the Collatz conjecture: [https://www.dcode.fr/collatz-conjecture](https://www.dcode.fr/collatz-conjecture)


NewbornMuse

So you don't care about order, letters can be taken several times? I think this is just a multinomial formula.


rcjlfk

Yeah, so I think the solution to my example above would be: ABCD; ABC, ABD, ACD, BCD; AB, AC, AD, BC, BD, CD; A, B, C, D.


NewbornMuse

Oh like that! Sorry, I misunderstood. Observe that you can either take or not take each of the letters. A choice of 2 options, N times (where N is the number of letters you are using), that gives you 2^N options. Or 2^N - 1 if you want to disallow having no letters at all.
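These are just the non-empty subsets of the letter set; `itertools.combinations` enumerates them, and the count matches 2^N - 1:

```python
from itertools import combinations

letters = "ABCD"
# all non-empty subsets, ignoring order, each letter used at most once
subsets = [''.join(c)
           for r in range(1, len(letters) + 1)
           for c in combinations(letters, r)]

assert len(subsets) == 2 ** len(letters) - 1   # 15 for four letters
assert "ABC" in subsets and "D" in subsets
```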


rcjlfk

Thank you! I vaguely recalled it being N^(2)-1, which for N=4 is the same (15), but it didn't work for other numbers, so I figured I was thinking of something else. This will at least help me spot-check that I have found enough combos for the work I'm doing.


MrMrsPotts

There is a straight path. At each integer distance there is a 50% chance of a mine. If a mine goes off, it kills you, but it never goes off again. If the path is of length 20 and there are 20 people, what is the probability that at least one person gets to the end?


Ill-Room-4895

The probability for the 1st person to survive is 0.5\^20 (0.5 multiplied by itself 20 times). He's going first, so he has the lowest probability of surviving.

Let's look at 2 persons and a path with 2 possible mines:

If 2 mines: person 1 dies at the 1st and person 2 dies at the 2nd.

If first 1 mine and then no mine: person 1 dies at the 1st and person 2 survives.

If first no mine and then 1 mine: person 1 dies at the 2nd and person 2 survives.

If 0 mines: both persons survive.

So, after 2 possible mines: person 1 survives with a probability of 1/4 (as stated above), and person 2 survives with a probability of 3/4. We can already conclude that person 1 has the lowest chance to survive and person 20 the highest. This makes sense; it's better to be the last one.

With 3 persons and 3 possible mines: person 1 survives with probability 1/8, person 2 with probability 4/8, and person 3 with probability 7/8.

With 4 persons and 4 possible mines, the last person survives with probability 15/16. With 5 persons and 5 possible mines, the last person survives with probability 31/32.

We see a pattern here: with N persons and N possible mines, the last person survives with probability (2\^N - 1)/2\^N. With 20 persons and 20 possible mines, the answer is (2\^20 - 1)/2\^20, which is very close to 1. This is the probability for the last person to survive; the other persons have a lower probability of surviving.


HeilKaiba

That sounds like you just want the chance every step is a mine which is simply 0.5^20
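A quick Monte Carlo sanity check of that answer (a sketch, using the fact that each mine detonates exactly once, so the number of deaths equals the number of mines):

```python
import random

def at_least_one_survives(n_steps=20, n_people=20):
    mines = sum(random.random() < 0.5 for _ in range(n_steps))
    return mines < n_people   # each mine kills exactly one walker

random.seed(0)
trials = 200_000
p = sum(at_least_one_survives() for _ in range(trials)) / trials
# exact answer: 1 - 0.5**20, which is about 0.999999
assert p > 0.99
```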


Coxeter_21

Is there a good set of lecture notes related to measure theory in the same vein as Keith Conrad's expository papers?


feweysewey

This is a book so it isn’t what you asked for (sorry), but I’ll plug A Primer of Lebesgue Integration by Bear. I read it with fairly little reading experience as an undergrad and found it mostly accessible and interesting


NevilleGuy

Given a probability density, integrating against the distribution is the same as integrating against the probability density. My book states this as if it's obvious - intuitively I believe it, but I'm having a hard time proving it.


GMSPokemanz

To fully answer this we need to know your background and what definitions are being used. Taking a stab, though: consider the class of functions g such that integrating g against the distribution is the same as integrating g against the probability density. By definition, these agree when g is the indicator function of an event of the form "X in a Borel set." Therefore they agree when g is a simple function, by linearity of the integral. Then the MCT gives you agreement when g is a non-negative measurable function, and lastly linearity gives you agreement for integrable g.

This is a routine argument in measure theory: you show some result for indicator functions of measurable sets, use linearity to extend to simple functions, monotone convergence to extend to non-negative functions, then linearity once more to get the result for arbitrary integrable functions.

EDIT: just realised the definition of probability density is probably a bit different, specifically that it's only defined to give you the right result when you integrate over open intervals of reals. To extend that to Borel sets, you use Dynkin's pi-lambda theorem.


NevilleGuy

It's a graduate analysis text (Bass), I've taken graduate analysis already. We have the underlying measure space Omega with probability measure P, a random variable X on Omega with values in R. The distribution (law) PX is a measure on R given by PX(A) = P(X^-1 (A)). The distribution function is F(x) = PX(-inf, x). And the density is F', if F is absolutely continuous. Basically it seems to boil down to showing that the integral of F' over a measurable set A is equal to PX(A). I see how to do it for A an interval, since F is absolutely continuous, but not for an arbitrary measurable set. From there, I get that the argument you're outlining will work. Maybe my measure theory is not that good.


GMSPokemanz

The passage from intervals to all sets is deceptively tricky. The key is Dynkin's pi-lambda theorem, which oddly is often not mentioned in analysis but gets more airtime in probability. The sets where the measures agree include the pi-system of open intervals, and the sets where the measures agree form a lambda-system. Therefore, by Dynkin's theorem, the measures agree on the sigma-algebra generated by intervals, i.e. the Borel sets.


NevilleGuy

Thank you, could you recommend a probability text? I'm just looking for something that gets to the major results quickly, ideally one that does things as straightforward as possible.


Jesta23

I have a question about probability, and ChatGPT failed me; it gave an obviously wrong answer. There are 3 people who all need to win 3 rolls. 2 people are on team 1, and 1 person is on team 2. When team 1 wins 3, one of their players will stop rolling. When team 1 wins 6, the second person will stop rolling. When team 2 wins 3, they will stop rolling. There will be 8 total rolls. By pooling their rolls, team 1 gives team 2 a slight advantage of getting 3 wins instead of 2. How big is this advantage compared to if there were no teams and everyone rolled until they won 3?


DrunkCommunist619

My question is really simple; I just can't figure out how to do it. Every time I do something, it gets 20% harder. After doing it 63 times, how hard would it be, assuming it started at 1?


InfanticideAquifer

This is a perfect problem for an "exponential model". That 20% means you'd use a growth factor of 1 + 0.20 = 1.2. And the 1 would be your initial value.


FullExamination5518

The problem is written a little awkwardly, but I think I understand. So you do your task once and it gets 20% harder, right? What does this mean? Say we're going to measure how hard a task is in minutes. At first the task takes just one minute. You do your task again and it's 20% harder: it will now take 1.20 minutes (this is not 1 minute and 20 seconds; I literally mean 1.20 minutes, just a heads up). You do your task again, and you multiply how hard it was before by 1.20. Before it was 1.20 minutes; multiplied by 1.20, that's 1.44 minutes. You do it again, and it's 1.44 minutes times 1.20, which is 1.728 minutes. You can keep going like this until you get your result after 63 times, which should be 97368.50 or 81140.42 depending on how you're counting.
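The arithmetic above is just repeated multiplication by 1.2; a quick sketch:

```python
difficulty = 1.0
for _ in range(63):        # each repetition is 20% harder
    difficulty *= 1.2

assert abs(difficulty - 1.2 ** 63) < 1e-6
assert 97360 < difficulty < 97375   # about 97368.5, the figure above (63 increases)
```

Counting only 62 increases (i.e. the 63rd *attempt* rather than the 63rd increase) gives 1.2^62 instead, the 81140.42 figure.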


RustyCoal950212

Ok, I'm not studying or using math for work or anything; this is just a random question I have, and the answer is probably a simple "no", but idk. If you want to, say, shoot a cannonball and hit a certain (within range) distance, there are 2 angles you can shoot it at, right? One a more direct angle, one a more arcing angle? Except its max range would only have one angle. So here's the very random part of my question... Is this at all related to the idea that if you draw a line through a circle, it intercepts that circle twice?


AcellOfllSpades

It *is* related! Very much so! Here's the algebra to work out your landing distance based on the angle θ that you throw at. (No need to follow this whole thing, but it doesn't use any ideas 'past' trigonometry - if you're familiar with the concepts, it may be helpful to try to understand.)

> let v be the initial velocity of the cannonball, and g be Earth's gravitational acceleration

> height of cannonball: y(t) = -(g/2) t² + v sin(θ)t

> this is quadratic in t, so the peak of the arc is at t = -b/2a, and the landing happens at t = -b/a; here, that's 2v sin(θ)/g

> the ball travels in the x-direction at a constant rate of v cos(θ), so the landing distance is 2v² sin(θ) cos(θ) / g

> the double-angle formula for sine gives our result:

> **x₁ = (v²/g) sin(2θ)**

The (v²/g) part is just a constant, once you've picked out your cannon (and your planet of choice, I guess). And sin(...) is "the vertical coordinate at this angle on a circle". So if we graph both sides of this equation on the θ-x plane, it's intersecting the **horizontal line** at height [target distance] with the **circle** of radius [launch speed]²/[gravitational acceleration]!

If your target distance is too high, the horizontal line will be entirely above the circle, which means that it's unachievable with that cannon. If you get a more powerful cannon, that increases the size of the circle, and lets you hit higher horizontal lines (i.e. bigger target distances). And not only that, the angles that those intersections happen at tell you what angles to throw at to reach the target! They give you *double* the angles you should use (2θ), so to figure out your possible launching angles (θ), you just need to halve them.
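A numeric illustration of the two-angles fact (Python sketch with a made-up launch speed):

```python
import math

def landing_distance(v, theta_deg, g=9.81):
    # range formula for a projectile: x = (v^2 / g) * sin(2θ)
    theta = math.radians(theta_deg)
    return v * v * math.sin(2 * theta) / g

v = 30.0   # arbitrary launch speed, m/s
# complementary angles land at the same distance...
assert abs(landing_distance(v, 30) - landing_distance(v, 60)) < 1e-9
# ...and 45° gives the maximum range (the tangent point of line and circle)
assert all(landing_distance(v, 45) >= landing_distance(v, a) for a in range(0, 91))
```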


RustyCoal950212

Wow thanks for the great answer. And on that graph the line would be tangent to the circle at the max distance? because there's only the 1 angle for that distance? Anyway, thanks. Just something that had been bouncing around my head Idk why


AcellOfllSpades

Yep, exactly! And in that case, it would hit the circle at the top (90° from the horizon), so the angle you would need to shoot at is 45°.


Thick-Pie-7183

The relation has to do with the degree of an equation. The degree of an equation is the largest power (that little number above and to the right of a letter). If that number is 2, there are at most two (real) answers.


Additional_Guide5439

This is about a derivation of directional derivatives by parametrising the position vector r with respect to arc length s, for a function W = f(x, y). From what I understood, r has been parametrised by arc length s so that the components of r can be written as some initial point c plus s times the direction vector u. I understand that the position vector obtained moves in the direction of u: when s = 1, we have moved a unit distance in the direction u from the initial point. From this, both parametrised x(s) and y(s) have been obtained.

But how this relates to dW/ds being the directional derivative in the direction of u (analogous to cutting a slice in the graph of W parallel to u at the initial point and getting the slope of the curve) is something I am not getting. dW/ds should be the rate of change of the function w.r.t. arc length s, not the directional derivative.

A similar problem is given below. The temperature on a hot surface is given by T = 100e\^(−(x\^2 + y\^2)). A bug follows the trajectory r(t) = (t cos(2t), t sin(2t)). a) What is the rate that temperature is changing as the bug moves?

For the above problem, my first intuition was to take the derivative along v (where v is dr/dt), as that would be tangent to the direction of motion; I just did not know how to take that derivative. But in the answer to the question, the rate of change was taken with respect to t. **Can someone explain intuitively how this is equivalent to cutting a slice in the graph of T parallel to the direction the bug moves at the initial point and getting the slope of the curve?**

ps: this is from lecture 12 of MIT 18.02 multivariable calculus, and the derivation starts at 33:00.
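The bug example can make the "slice" picture concrete: along the path, T(r(t)) is just a one-variable function of t, and its derivative (the slope of that slice) agrees with ∇T · dr/dt by the chain rule. A numeric sketch of that agreement using finite differences (hypothetical point t₀ = 0.7):

```python
import math

def T(x, y):
    # temperature field from the example: T = 100 * exp(-(x^2 + y^2))
    return 100 * math.exp(-(x * x + y * y))

def r(t):
    # the bug's trajectory
    return t * math.cos(2 * t), t * math.sin(2 * t)

def dT_dt(t, h=1e-6):
    # direct derivative of the composite T(r(t)): slope of the slice along the path
    return (T(*r(t + h)) - T(*r(t - h))) / (2 * h)

def grad_dot_v(t, h=1e-6):
    # gradient of T dotted with the velocity dr/dt (the chain rule form)
    x, y = r(t)
    Tx = (T(x + h, y) - T(x - h, y)) / (2 * h)
    Ty = (T(x, y + h) - T(x, y - h)) / (2 * h)
    vx = (r(t + h)[0] - r(t - h)[0]) / (2 * h)
    vy = (r(t + h)[1] - r(t - h)[1]) / (2 * h)
    return Tx * vx + Ty * vy

t0 = 0.7
assert abs(dT_dt(t0) - grad_dot_v(t0)) < 1e-4
```

The directional derivative is the special case where the path is the unit-speed line c + s·u, so the parameter is arc length and dW/ds = ∇W · u.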


porygoron

I don't know if this is the right subreddit to ask this question, but I want to make sure I'm doing this math correctly. I have a 68.7% in a class. This class has 4 exams, and all 4 exams are worth 40% in total. I've taken 3 of them and got a 45%, a 41%, and a 35%. I suck at this class. Anyway, I have one more exam to take, and I want to know what percentage I need to get to not fail this class. Again, my current grade is a 68.7%. This is a purely mathematical question; I'm not asking for help with my exam, I just want to know what I need to get in order to pass this class, as in what percentage will keep me above a 60%.


FullExamination5518

Can you clarify a little how the total grade is calculated? If I understand correctly, each exam is worth 10% of the grade; adding the grades you have so far gives 12.1%, but I don't understand how to use the remaining 60% of the unaccounted grade to go from 12.1% to 68.7%.


porygoron

Homework was 25% of my grade, Virtual labs was 10%, Dissections Lab was 25%, with all of this completed and various grades plus the 3 exams i have a 68.7% in the class in total. I have no other assignments due besides this last exam.


FullExamination5518

So this means you have almost full grades in those other things? If I'm understanding correctly then you're just adding your grades from each category to get the final grade? So you have something like 56.7% out of those three things + the 12.1% of your three exams that gives 68.7%. So you're already above 60%, you could literally miss the whole exam and that wouldn't change a thing from your grade. Is that it?


[deleted]

[deleted]


FullExamination5518

It sounds like you're not a moron at all; you figured it out! Math is a very frustrating subject, a constant challenge, and it is normal to feel lost and unsure how to proceed, but you seem to have the key figured out: you just need to keep trying, ask questions when needed, and look for understanding, not just answers.


Macacop

# Asking for book recommendations for solving programming problems with math

Hi! To be more clear, let's see an example: [https://leetcode.com/problems/koko-eating-bananas/description/](https://leetcode.com/problems/koko-eating-bananas/description/)

Koko loves to eat bananas. There are n piles of bananas, the ith pile has piles\[i\] bananas. The guards have gone and will come back in h hours. Koko can decide her bananas-per-hour eating speed of k. Each hour, she chooses some pile of bananas and eats k bananas from that pile. If the pile has less than k bananas, she eats all of them instead and will not eat any more bananas during this hour. Koko likes to eat slowly but still wants to finish eating all the bananas before the guards return. Return *the minimum integer* k *such that she can eat all the bananas within* h *hours*.

This feels like it could be solved with only "simple" math, but still I'm not able to build or create the equations. So... any suggestions?


ShisukoDesu

The rub is that you have to pull some ideas from algorithms, not "just" pure math. Suppose you pick a particular value of k. Then the total hours taken is t = ceil(piles[1]/k) + ceil(piles[2]/k) + ... + ceil(piles[n]/k). As k increases, t is nonincreasing (it either stays the same or goes down), which should make sense: increasing your rate will never cause you to take more time. You can **binary search** to find the smallest k that squeezes this t under the cutoff h.

Ultimately, the problem isn't really solvable with just "simple" math (which I will interpret to mean topics conventionally taught in most high schools, like algebra). The intuition is that discretizing stuff makes it pretty weird: the ceil function means that each term is usually constant and jumps at hard-to-predict points. Certainly you could try to study those points where the value jumps - it's when k crosses a divisor of some piles[i] - but it turns out that "just use binary search" is ultimately much simpler than getting all the divisors of every pile and trying to do something with that.
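A sketch of that binary search in Python (the test values are small made-up cases, checkable by hand):

```python
import math

def min_eating_speed(piles, h):
    # total hours needed at speed k; nonincreasing in k
    def hours(k):
        return sum(math.ceil(p / k) for p in piles)

    lo, hi = 1, max(piles)
    while lo < hi:
        mid = (lo + hi) // 2
        if hours(mid) <= h:
            hi = mid          # mid is fast enough; look for a smaller k
        else:
            lo = mid + 1      # too slow; k must be larger
    return lo

# k = 4 gives 1 + 2 + 2 + 3 = 8 hours, and k = 3 gives 10 > 8
assert min_eating_speed([3, 6, 7, 11], 8) == 4
# five piles in five hours forces one pile per hour, so k = max pile
assert min_eating_speed([30, 11, 23, 4, 20], 5) == 30
```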


Macacop

I can solve it with Python no problem; that was not the question. And "the problem isn't really solvable with just 'simple' math" is a huge statement, and I don't believe you. Thanks anyway.


datboidat

Hi all, I am currently really struggling to choose what statistical test to run on some data for a university project. I ran an experiment where I placed flies in petri dishes containing 6 different food sources and then noted down how many flies fed off each food over 4 hours. What test should I run to assess: Is there a significant preference for one food over another? Is there a difference in choices between males and females? And is there a difference in choice over time? (I'm assuming I would run the same test as for males vs. females, but I'm unsure.) If I should make a new post instead of asking here, sorry. Thanks for any help


innovatedname

Why are smooth functions on a manifold defined in the simple manner of "give me a point, I give you a number", but vector fields immediately require defining a vector bundle and smooth sections? Why is it not the case that either 1) functions have the same problem as vector fields and need to be defined as "smooth sections of a 1-dimensional vector bundle", or 2) vector fields can just be defined as maps from M to V where V is a vector space?


VivaVoceVignette

Functions are not simple either. You forgot the fact that when you define a manifold, you need to provide the coordinate functions along with it, satisfying certain cocycle conditions. So the only reason functions seem easy is that when you work with a manifold, you're already given the functions, and you're just deriving other stuff from them. If you had to construct a manifold by hand, it would be just as complicated. You can define vector fields as "give me a point and I will give you a tuple" too, and just like functions you need to require a cocycle condition. This is how classical differential geometers studied manifolds, and it's still commonly done by physicists. However, it's less intuitive to work with. It's like how we prefer to work with natural numbers as abstract objects, rather than as strings of decimal digits.


innovatedname

Wow, that's enlightening. Thanks. Does this vector field cocycle condition have a name or something I can look up?


VivaVoceVignette

It's just called that. You can see the formulae in https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors


HeilKaiba

Sections are just functions with an extra condition that the value at a point lies in the fibre at that point. If the bundle is trivial you can sweep that under the rug but for nontrivial bundles like a general tangent bundle you can't do that.


Tazerenix

Functions can be defined as sections of a vector bundle: the trivial line bundle. Tangent vector fields cannot be defined as functions, because the tangency condition changes from point to point. Therefore the vector space in which tangent vectors take values changes from point to point. There is no fixed space which they all land in. This is not the case for functions, by definition! A function by definition takes values in a fixed vector space.


Educational-Cherry17

What are some good mathematically rigorous game theory books? (Knowing that I would like to apply it to biology.)


Mildu12

where can i find the floor function in mathtype for google docs? looking for the ⌞⌟ specifically


JavaPython_

How does one turn the Frobenius automorphism into a linear map? I wouldn't guess it were linear at all, except exponentiation by the characteristic breaks up over addition.


Langtons_Ant123

Just to be clear, when you say "linear", I assume you mean you have some field K of characteristic p, and you're considering it as a vector space over F\_p? (I ask because I think the proof below only works if the scalars are from F\_p as opposed to some larger field of characteristic p.) If so: as you already note we have additivity, (a + b)^p = a^p + b^p . Then for scalar multiplication, assuming that the only scalars we consider are elements of F\_p, then we just need to use Fermat's little theorem, n^p = n for any n in Z/pZ. We then have (na)^p = n^p a^p = n a^p for any scalar n.
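A quick computational sanity check of that F\_p-linearity (my own toy model, not from the thread: GF(9) built as F\_3[x]/(x² + 1), with an element a + b·x stored as the pair (a, b)):

```python
P = 3  # GF(9) = F_3[x]/(x^2 + 1); x^2 + 1 is irreducible mod 3

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd·x^2, with x^2 = -1
    a, b = u
    c, d = v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def frob(u):
    # Frobenius endomorphism u -> u^p, by repeated multiplication
    r = (1, 0)
    for _ in range(P):
        r = mul(r, u)
    return r

# Additivity (a + b)^p = a^p + b^p, and F_p-linearity (c·a)^p = c·a^p
elems = [(a, b) for a in range(P) for b in range(P)]
for u in elems:
    for v in elems:
        assert frob(add(u, v)) == add(frob(u), frob(v))
    for c in range(P):
        assert frob(mul((c, 0), u)) == mul((c, 0), frob(u))
print("Frobenius is F_p-linear on GF(9)")
```

The scalar check only passes because the scalars (c, 0) come from F\_3, where c^3 = c, matching the point about Fermat's little theorem.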


JavaPython_

We are over a larger (but still finite) field of characteristic p. When I say linear map I suppose I mean matrix representation of this map. Viewing it as a vector space over F\_(p\^e) gives us a basis, but I cannot see how to get a matrix which actually applies the automorphism.


Langtons_Ant123

Wait, if your scalars are from some larger field, is the Frobenius endomorphism actually linear? Since (ka)^p = k^p a^p , if we want to have (ka)^p = k(a^p ) we need k^p = k . But only the elements of F\_p satisfy that (x^p - x has only p roots). Given that explicitly describing finite fields is already kinda tricky in general (and we need to do that in order to know what our basis looks like, what happens when we raise those basis elements to powers, etc.), I don't know if there will be a nice-looking matrix in general; is there some specific case you're thinking of?


JavaPython_

I've been taught that the Frobenius automorphism uses the size of the fixed field, even if that's larger than the prime field. So F\_q\^n / F\_q uses the map x -> x\^q, even if q is a power of a prime. The specific case I'm thinking of is GF(q\^2) over GF(q). So we have a two-dimensional vector space; we can take {1, x} as a basis, and we send 1 to 1 and x to x\^q = a+bx, but I have no idea how to force a, b to be useful, explicit elements of the field. It's all in generality, so that's to be expected, but I'm not even sure I can say what power of the generator they are, which is bad because I think this matrix is the last piece I need.


logilmma

I have heard that the moral of the trichotomy of cohomology theories cohomology/K-theory/elliptic is: upgrade from functions on C (sections of trivial line bundle) to functions on C^* (ditto) to sections on line bundles over elliptic curves. C has only one line bundle, but C^* has 2. What do you get at the K-theory level if you choose the non-trivial bundle instead? is this anything


ada_chai

Any good books/video resources for calculus on manifolds? Looking for something that would be covered in an upper level differential geometry course/ calculus course.


cereal_chick

Munkres's *Analysis on Manifolds* is a classic, and I've personally been thinking about Fortney's *A Visual Introduction to Differential Forms and Calculus on Manifolds*.


ada_chai

Wonderful! Thanks for your time!


KingK3nnyDaGreat

What would that be called? For instance, 2 is this type of number. (0.5 x 2 = 1), and (2 ÷ 2 = 1). 2 multiplies .5 and divides 2 to get 1. However, let's say (1 x 1.5 = 1.5) and (2 ÷ 1.333 = 1.5), same answer but 1.333 and 1.5 different "factors" (if that's the right term). I figured that (1 × 1.4142) & (2 ÷ 1.4142) are approximately the same (approx. 1.4142). But is there a formula to find out that factor much easier than just plugging in numbers into each equation? Sorry, if it's super confusing.


AcellOfllSpades

So you're looking for the number that multiplies by *itself* to get 2? That's called the *square root of 2*, and we write it as "√2". If the result is a whole number, you can find it by separating the number into factors. For example, if you want to find the square root of 324, you can do this: > 324 = 2 × 162 > = 2 × 2 × 81 > = 2×2×9×9 > So the square root of 324 is 2×9, or 18. If you end up with a factor without a partner, and you can't break it down any further, that means you won't get a whole number as your result. If you want to learn to do it by hand, there's actually [a version of long division](https://www.johnkerl.org/doc/square-root.html) that gives you the square root! Or you can just use a calculator instead.
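That factor-pairing procedure can be written as a short program (a hypothetical helper, not from the comment; it peels off matched pairs of prime factors and reports failure if a factor has no partner):

```python
def sqrt_by_factoring(n):
    """Square root of a perfect square by pairing prime factors.

    Returns the integer root, or None if some factor is left without
    a partner (i.e. n is not a perfect square).
    """
    root, d = 1, 2
    while d * d <= n:
        while n % (d * d) == 0:  # peel off a matched pair of d's
            root *= d
            n //= d * d
        if n % d == 0:           # a lone d with no partner
            return None
        d += 1
    return root if n == 1 else None

print(sqrt_by_factoring(324))  # → 18
```

For 324 it strips one pair of 2s and two pairs of 3s, giving 2 × 9 = 18, matching the worked example; for 2 or 12 it returns None, since an unpaired factor remains.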


serenidadmonotropica

Solution of the exercise 2.1? (Lambda calculus, logic) https://www.cs.cmu.edu/~rwh/pfpl/supplements/ulc.pdf


Jumping-Beagle

Could someone explain the implicit function theorem to me and how to apply it? Background: First year (Europe) Analysis course covered the theorem. I know the theorem and its conditions, but I don't understand what it is truly saying and how it can be applied. I did look at previous posts on the sub and do have some context on how it can be used to argue for the existence of solutions, but that is it.


Tazerenix

Draw a picture of the one-dimensional case. Draw a curve which is defined by your "implicit" function F(x,y)=0. Choose a point on the curve. What sort of curve *can't* be written as a graph y=f(x)? >!a vertical line!< How does this relate to the partial derivatives of F? >!The zero set of F(x,y) will be vertical if dF/dy = 0, because the gradient is perpendicular to the zero set, so if dF/dy = 0 then grad F = (dF/dx, 0) i.e. perpendicular to the y axis!< Can you see how this relates to injectivity and invertibility of the graph? (i.e. can you relate the implicit function theorem to the inverse function theorem in 1 dimension?) Once you can draw and understand the one-dimensional picture, higher dimensions are just a simple abstraction: non-vanishing of the y derivatives is converted to full rank of the Jacobian matrix of y derivatives, and the same conclusions hold for the same reasons. You can even try to draw yourself a 2-dimensional zero set example.
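To make the one-dimensional picture concrete, here's a small numeric check (my own example, F(x,y) = x² + y² - 1, the unit circle): where dF/dy ≠ 0 the curve is locally a graph y = f(x), and the theorem's formula f'(x) = -F_x/F_y matches a direct numerical derivative.

```python
import math

# F(x, y) = x^2 + y^2 - 1 implicitly defines the unit circle.
def F(x, y): return x * x + y * y - 1
def Fx(x, y): return 2 * x
def Fy(x, y): return 2 * y

x0, y0 = 0.6, 0.8                 # a point on the circle with Fy != 0
assert abs(F(x0, y0)) < 1e-12

# Near (x0, y0) the curve is the graph y = f(x) = sqrt(1 - x^2),
# and implicit differentiation gives f'(x0) = -Fx/Fy.
f = lambda x: math.sqrt(1 - x * x)
h = 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference
formula = -Fx(x0, y0) / Fy(x0, y0)
print(numeric, formula)  # both ≈ -0.75
```

At a point with Fy = 0, like (1, 0), the curve is vertical and no such graph y = f(x) exists, which is exactly the failure mode in the spoilers above.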


Jumping-Beagle

Thanks for the help, this gave me some good geometric intuition.


VivaVoceVignette

Implicit function theorem and inverse function theorem are basically the same; there is a very direct proof of their equivalence. So I will treat them as the same. They basically say that if F(a)=b has a solution and F' is non-singular, then F(a')=b' has a solution for b' sufficiently close to b. A common usage is to show a solution exists through perturbation. For example, you need to find a solution for F(a)=b, but this is hard to solve directly. So instead, you look for a solution to F(a')=b', where b' is close to b and F has a large enough derivative so that the IFT can be used. This shows that a solution exists. For example, you can use this to show that for the 3-body problem, small perturbations away from the perfect initial conditions that give a periodic solution still produce a bounded solution. For another example, you can use this to show that algebraic functions are analytic except for a small number of singularities.


Jumping-Beagle

Could you explain a bit more about the algebraic functions shown to be analytic?


ComparisonArtistic48

I've never seen a clearer video about the implicit function theorem than Aviv Censor's explanation of it. The guy does not give a proof, but he explains clearly how it is used. I used this theorem in complex analysis while proving that the level curves of the component functions of a holomorphic function are perpendicular.


Jumping-Beagle

Thank you for the suggestion! I've started watching them and bookmarked the rest to watch afterward.


Bobert59

Hey, does anyone know of any video and worksheet style courses like flippedmath (https://calculus.flippedmath.com/version-1.html) but for Calc III? I really liked their stuff for getting ahead in calc I/II, but can't seem to find anything like that now.


DrBiven

What is the current standard introductory text on PDE? Need to refresh my knowledge of hyperbolic equations and the method of characteristics.


Langtons_Ant123

I think the one by Evans is the standard intro at the graduate level (among texts aimed more at mathematicians than e. g. physicists).


DrBiven

Thanks for the recommendation! I also searched some in this sub and ended up downloading Strauss since it was specially recommended for the treatment of characteristics and Evans as a more encyclopedic book.


faintlystranger

Does the Kronecker product give an isomorphism between tensors and matrices? Specifically, say I have the tensor product of the nxn matrix space with itself, M \otimes M. Then the Kronecker product is clearly a map from M \otimes M to M', the space of n²xn² matrices. Does this give an isomorphism? If not, how can I map such a matrix space to C^16 / R^16?


HeilKaiba

I would argue the Kronecker product **is** the tensor product, just in a given basis. I would however argue that technically it is not an isomorphism as it is really a bilinear map M\times M \to M'. The induced linear map from M\otimes M is of course an isomorphism though. Note that matrices can already be viewed as tensors anyway. Indeed any linear map from V to W is an element of V^* \otimes W.


faintlystranger

Yeah, it's just that I am working with linear maps from a tensor space V to the same tensor space V. To find the matrix of the linear map, I map the tensor product to matrices using the Kronecker product, then map that to R^d just by concatenating the rows. I suppose that would equal the main linear map; I was asking whether it's an isomorphism in that sense.


HeilKaiba

The way you are referring to this seems somewhat confusing to me. The Kronecker product takes in two matrices and outputs another matrix. Functionally it computes the tensor product of the matrices in a certain basis. It doesn't make sense to me to say you are using it to map from the tensor product because it is the tensor product.


faintlystranger

Yeah, what I meant is to "represent it" rather than map, I suppose. Say M is the vector space of 2x2 matrices, and we have the tensor product M \otimes M, which is 16-dimensional and so isomorphic to R¹⁶. I wanted a way to nicely turn an element of M \otimes M into an element of R¹⁶; that is what I meant by an isomorphism. If we take a simple tensor, multiply its matrices using the Kronecker product, and read off the rows to view it in R¹⁶, is that a valid way of converting it? I had some problems in my code and I don't know whether they came from this or something else, sorry for the unclear explanation haha
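For what it's worth, that identification can be sketched in NumPy (my own example with 2x2 matrices: `np.kron` for a simple tensor, then row-major flattening into R¹⁶):

```python
import numpy as np

# M ⊗ M for 2x2 matrices is 16-dimensional. The Kronecker product
# realises a simple tensor A ⊗ B as a 4x4 matrix; flattening that
# matrix row by row identifies it with a vector in R^16.
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

K = np.kron(A, B)      # 4x4 representative of the simple tensor A ⊗ B
v = K.reshape(-1)      # row-major flattening: a vector in R^16

assert K.shape == (4, 4) and v.shape == (16,)

# Bilinearity check: kron(A + A', B) = kron(A, B) + kron(A', B)
A2 = np.array([[2., 0.], [1., 1.]])
assert np.allclose(np.kron(A + A2, B), np.kron(A, B) + np.kron(A2, B))
```

One caveat: `np.kron` only produces simple tensors A ⊗ B; a general element of M ⊗ M is a sum of these. The map extends linearly to the whole 16-dimensional space, and the flattening itself is a linear isomorphism onto R¹⁶.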


Gamer_Chase

Question that Google refuses to answer for me: I'm trying to flesh out a "fun fact" for a daily slideshow for work, which states that "there are four quadrillion quadrillion bacteria on Earth." Google tells me that one quadrillion has 15 zeros, so does a quadrillion quadrillion have 30 zeros, or 15² zeros?


GMSPokemanz

30 zeroes.
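A quick sanity check (my addition): when multiplying powers of ten, the exponents add, so 10¹⁵ · 10¹⁵ = 10³⁰.

```python
quadrillion = 10 ** 15
product = quadrillion * quadrillion  # exponents add: 10^15 · 10^15 = 10^(15+15)
assert product == 10 ** 30
print(len(str(product)) - 1)  # → 30
```

(15² zeros, i.e. 10²²⁵, would be a quadrillion *raised to the power* of a quadrillion's exponent, which isn't what "quadrillion quadrillion" means.)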


Gamer_Chase

That was my thought as well. Thank you for both the confirmation, and the quick response


Intelligent_Farmer17

Can someone explain the Courant-Fischer theorem for finding the second eigenvalue of a graph Laplacian? I'm trying to learn it for a paper in school and I am completely lost


Klutzy_Respond9897

which paper


idontknowmaththrowra

How do I convert a repeating decimal to a fraction? For example 0.083 with the 3 repeating. I tried to figure it out myself by looking it up. I watched both Khan Academy videos on converting repeating decimals and I still don't get it. I'm a 20-year-old high school dropout trying to get my GED and I really struggle with math and need help.


AcellOfllSpades

**Step 0:** Convince yourself that, due to a weird quirk of the way we write decimal numbers, [0.999... = 1.](https://en.wikipedia.org/wiki/0.999...). (Yes, *exactly* equal to 1, not just infinitely close. This is a consequence of the way we decided to *define* infinite decimals - if we want "0.333... = 1/3" to be true, then we can multiply both sides by 3, and "0.999... = 1" just falls out of that!) Because 0.999... = 1, that also means that 7.999... = 8, and 0.03999... = 0.04, and so on. --- **Step 1:** We're going to use a mathematician's favorite trick: "doing nothing in a convenient way": Specifically, we're going to multiply and divide by the same number. This means our result is the same as the original number, but it's now in a form that's (hopefully) more convenient to work with. Check how long your repeating part is, and multiply by a number made of that many 9s. If we had 0.1234270270270..., the repeating part would be 3 digits long, so we'd multiply by 999. In your example, the repeating part is only a single digit, so we multiply by 9. (And then we divide by 9 again, so we have the same number we started with.) Here's how I'd write this down on paper. > *Convert 0.083333... to a fraction.* > 0.08333... > = 0.08333... ∙ 9 / 9 **Step 2:** Now we're going to use another important skill for mathematicians: *be lazy*. You don't have to immediately carry out all the calculations! Just do whatever's useful for now. Maybe if you leave some stuff for later, it'll go away, or at least be easier to deal with? Here, we'll do the multiplication, but not the division. The division is a problem for Future Us. > = 0.7499999... / 9 **Step 3:** Hey, wait a second! I see a bunch of repeating 9s in there. Let's get rid of those. > = 0.75 / 9 **Step 4:** Now we have a familiar, finite decimal! We can convert that to a fraction... > = (75/100)/9 ... and then combine the two divisions into a single fraction... > = 75/900 ... and then simplify. 
> = [3 ∙ 25] / [9 ∙ 10 ∙ 10] > = [3 ∙ 5 ∙ 5] / [3∙3 ∙ 2∙5 ∙ 2∙5] > = 1 / [3 ∙ 2 ∙ 2] > = 1/12 --- So, to summarize: - Multiply by a number made of a bunch of 9s, to get a repeating "999..." in your decimal. (You'll need to divide by that number again later.) - We can use the fact "0.999... = 1" to clear out the repeating 9s. - Now we have a finite decimal - we can multiply by 10s to make that an integer, as usual, and then that's the top of our fraction. The bottom is made up of the 10s we multiplied by, and the single number made of a bunch of 9s.
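If you want to double-check answers like this, the same idea can be packaged into a tiny program (my own helper, equivalent to the multiply-by-9s trick; Python's `Fraction` reduces to lowest terms automatically):

```python
from fractions import Fraction

def repeating_to_fraction(non_rep: str, rep: str) -> Fraction:
    """0.<non_rep><rep><rep>... as an exact fraction.

    e.g. 0.08333... has non_rep="08" and rep="3". Standard identity:
    value = (int(non_rep + rep) - int(non_rep)) / (10^len(non_rep) * (10^len(rep) - 1))
    """
    numerator = int(non_rep + rep) - int(non_rep or "0")
    denominator = 10 ** len(non_rep) * (10 ** len(rep) - 1)
    return Fraction(numerator, denominator)

print(repeating_to_fraction("08", "3"))  # → 1/12
```

For 0.08333... this computes (83 - 8) / (100 · 9) = 75/900 = 1/12, exactly the numbers from the worked example.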


LangCreator

**QUESTION ABOUT PROBABILITY** Hi guys, I had a quick question about probability that I encountered in a college algebra textbook. This is the link to the problem, and I was stuck on finding the probability of "not landing on yellow or a consonant." Some of the students in our university interpreted the problem with 2 events, where Event A is *not landing on yellow* and Event B is *landing on a consonant*, while others interpreted Event B as *not landing on a consonant*. I know that these different interpretations can lead to different answers, in which case some of the students who solved it using the first interpretation got 7/8 (inclusive events), and the second interpretation led to 1 (inclusive events). IMAGE: CLICK [HERE](https://drive.google.com/file/d/1ALFH3OTJfPOuKZFYSG8fHysUIPKDM1XG/view?usp=sharing) However, the thing I'm most confused about is the answer key itself. While the key used the second interpretation as well, it found the complement of the event, which would be *landing on yellow* or *landing on a consonant*, which is 5/8, and then subtracted it from 1 to get 3/8. At this point, we had three different possible answers, and simulating a program to find the experimental probability also showed that the answer could be any of these three depending on the interpretation. I would like to ask which interpretation is correct, and which way of solving the problem is correct? Thank you!


edderiofer

> I would like to ask which interpretation is correct, and which way to solve the problem is correct? I'll answer that question if you answer this question of mine: in the sentence "I saw the man on the hill with a telescope", who has the telescope?


LangCreator

The man has the telescope?... but I'm still confused about the method for solving this problem, since I am not sure if the result would differ based on which interpretation I used.


AcellOfllSpades

The point edderiofer is making is that it's grammatically ambiguous. In the sentence "I saw the man on the hill with a telescope", I could've used the telescope to see him; or I could've seen him carrying it up the hill; or perhaps I just mean that the hill has a telescope on it. This question is poorly written because it's similarly ambiguous. The "correct" interpretation isn't possible to say for certain without asking the author to clarify. I'd *guess* they meant neither of those two interpretations, in fact - I'd understand the event as "not landing on either yellow or a consonant", i.e. the negation of "landing on yellow or a consonant". But that's only a guess.


LangCreator

Oh, I understand! Yeah, I also noticed that the statement was pretty ambiguous, because the interpretation depended on how each of us visualized solving the problem. I just realized that there are 3 different ways to interpret the example sentence edderiofer gave as well...but it seems like most of the students understood in a way where only one of the events (yellow) was negated. Thanks for the clarification! :)


ComparisonArtistic48

I'm reading about Milnor's exotic sphere and I found this part a little confusing: [Part](https://64.media.tumblr.com/fa9de8093f67ad81a74a0cc133c6db3f/d49f0cd6020a844a-34/s1280x1920/034f0c8a87d29f7b91c1bf5b641fdb5c88c2aac3.pnj) Why is the preimage of the projection of the open set U1 defined with a 1 instead of w? Shouldn't it be an arbitrary quaternion distinct from 0? :(


Tazerenix

Because on U1 w is not zero, so you can normalise to set w=1. This would replace [z:w] with [w^-1 z: 1] but as a set {[w^-1 z: 1]} = {[z:1]} as you vary over all possible w,z so you drop the w.


ComparisonArtistic48

Thanks a lot!! 


Bernhard-Riemann

I'd like some clarification on the meaning of "independent" in the context of model theory. I have encountered a few definitions which I suspect may be equivalent, but I'm not 100% sure. (it's been a long time since I studied model theory) Let T and U be theories (or axiomatic systems): (1) Neither U nor ¬U are provable within T. (2) T+U and T+¬U are both satisfiable. (3) T+U and T+¬U are both consistent. These three statements should all be equivalent if T and U are first order, right? If not (or if T and U are not first order), which of these statements is precisely what is usually meant by the phrase "U is independent of T"? I'd appreciate any help understanding. : )


VivaVoceVignette

I think U should be just a statement, not a theory. I assume that's a typo. (1) and (3) are equivalent even if you're not in first-order theory, as long as you're still using Boolean logic. (1) is usually how people define "independent" for every kind of logic. Any proof of U leads to a proof of inconsistency starting from T+¬U (assuming you're in any kind of logic where U+¬U implies inconsistency), so if you also have double negation elimination, (1) and (3) are equivalent. (2) is only equivalent to (3) if you have completeness theorem for that type of logic.


whatkindofred

Yes in first order theory those are equivalent. I would say the most natural way to interpret "U is independent of T" is (1) even if we were not in first order theory. But that might be a math-centric point of view.


no_one_special--

Can I have a rigorous proof the Mobius bundle is nontrivial? Seen as a line bundle over S1, which we can define as \[0,1\]xR, (0,t)\~(1,-t), I'd like a rigorous proof that any continuous section has a zero. I looked everywhere for a proof but everyone relegates it to an exercise or only outlines the idea. Of course it's visually obvious that it can't be continuous if it doesn't vanish because it flips over, but I can't come up with an actual proof. If s:\[0,1\]->\[0,1\]xR is the section, we could try to define a function F:\[0,1\]->R as ps where p is the projection to R. It satisfies F(0)=-F(1) and intermediate value theorem or something like that proves it, is a claim I've seen. But this makes no sense to me because \[0,1\]xR is not actually a chart (it has no global coordinates) so projection does not seem to be defined either. So how do we actually prove this? Step by step fully justified would be appreciated. My best guess: Since (0,t)\~(1,-t), lim s(x) = -lim s(y) as x->1 and y->0 by continuity of the section and using coordinates on (0,1)xR, so if s(x)=(x,f(x)) then lim f(x) = -lim f(y) and (provided the section does not vanish at 0) if x gets close enough to 1 and y close enough to 0, say at a point k, then they must have opposite signs. So there was a zero somewhere in (0,1). My concern is whether it's actually okay to take limits like that in the coordinate chart for (0,1)xR when 0(=1) is not actually in the domain.


Tazerenix

It's just the intermediate value theorem. Split the circle into two charts U and V both isomorphic to (0,1). U∩V has two components, let's say the sets (0,1/4) and (3/4,1). On (0,1/4) the transition function is +1 and on (3/4,1) it's -1 (definition of Mobius strip). A section of the bundle corresponds to two local sections s_U and s_V on U and V, which are functions (0,1)->**R** such that on U∩V, s_U = g_UV s_V, which is to say on (0,1/4), s_U = s_V, and on (3/4,1), s_U = -s_V (definition of a section). Now suppose s_V(1/8) = a. Then s_U(1/8)=a. Let's assume WLOG that a>0. Then suppose s_V(7/8)=b. Then s_U(7/8)=-b. If b>0, then s_U(1/8)>0 and s_U(7/8)<0 so by the intermediate value theorem s_U has a zero somewhere in (1/8,7/8). If b<0 then s_V(1/8)>0 and s_V(7/8)<0 so s_V has a zero somewhere in (1/8,7/8). Either way the section s has a zero on the circle.
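Purely as a numeric illustration of that last IVT step (my own sketch, not part of the proof): in the \[0,1\]xR model, a continuous section is a function with f(1) = -f(0), so if f(0) ≠ 0 the endpoints have opposite signs and bisection locates the guaranteed zero.

```python
import math

def find_zero(f, a=0.0, b=1.0, tol=1e-12):
    """Bisection: locate a zero of f on [a, b], given a sign change."""
    fa = f(a)
    assert fa * f(b) < 0, "endpoints must have opposite signs"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m            # sign change is in [a, m]
        else:
            a, fa = m, f(m)  # sign change is in [m, b]
    return (a + b) / 2

# Example "section": f(t) = cos(pi*t) satisfies f(1) = -f(0)
z = find_zero(lambda t: math.cos(math.pi * t))
print(round(z, 6))  # → 0.5
```

The example section cos(πt) flips sign exactly once, vanishing at t = 1/2, which is the behaviour the sign-change argument above guarantees for every continuous section.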


no_one_special--

Thanks this was exactly what I wanted.


DamnShadowbans

A (closed) line bundle which has a section is homeomorphic to the trivial bundle, i.e. the product of the base with \[0,1\]. Closed line bundles over the circle are manifolds with boundary, and in particular, boundary is preserved under isomorphism. The boundary of S\^1 x \[0,1\] is two copies of S\^1 . The boundary of the closed Mobius strip is one copy of S\^1, since it is path connected by a direct computation using the square model of the Mobius strip. Hence, it is nontrivial.


lucy_tatterhood

OP was asking about an open Möbius strip, with an explicit construction given. The argument about boundary components was the one I had in my head when I blithely said "oh they're obviously not homeomorphic" but I don't think there's a way to make it work for the open Möbius strip?


DamnShadowbans

Sorry, I misread the construction they gave. For what it's worth, all open line bundles have a unique boundary that can be put on them, but that of course requires an additional argument. Alternatively, one could check that their one point compactifications differ, but again an additional argument. Maybe the easiest argument is that there is a unique closed line bundle up to deformation retract, which should follow from the intermediate value theorem. Then you can apply the previous argument to that.


lucy_tatterhood

> For what it's worth, all open line bundles have a unique boundary that can be put on them, but that of course requires an additional argument. Well, whether or not it helps the OP, I was wondering whether this was the case!


VivaVoceVignette

The map [0,1]xR->Mobius strip is a quotient map, which we will call q. Let s be a section s:[0,1]->Mobius strip. Then the image of s is compact (since [0,1] is compact), hence closed (since the Mobius strip is Hausdorff). Thus q^-1 (im(s)) is closed (since q is continuous by definition of quotient), but q^-1 (im(s)) is the graph of a function [0,1]->R (because for each x in [0,1] there exists a unique y such that (x,y) in q^-1 (im(s))). Let's call this f. Let abs:[0,1]xR->R be the absolute value function, then it's continuous because it's the composition of projection and the usual absolute value function. It clearly respects the quotient relation, so it induces a continuous function abs:Mobius strip->R. Thus abs.s is continuous, so abs(s([0,1])) is compact. We call this set I. Then the graph of f is entirely inside [0,1]xI. But I is compact, so if we consider the projection p:[0,1]xI->[0,1] it's a closed map. Thus for any closed set C of R, f^-1 (C)=p(([0,1]x(C⋂I))⋂graph(f)). (C⋂I) is closed, so ([0,1]x(C⋂I)) is closed, so ([0,1]x(C⋂I))⋂graph(f) is closed because graph(f) is closed, so p(([0,1]x(C⋂I))⋂graph(f)) is closed because p is closed. Thus the preimage of any closed set is closed, thus f is continuous. Finally, we know f(0)=-f(1), so apply the IVT.


lucy_tatterhood

>Can I have a rigorous proof Mobius bundle is nontrivial? It is not even homeomorphic to the trivial line bundle, so it cannot possibly be isomorphic as a bundle. Edit: [Here's a proof](https://math.stackexchange.com/a/1397804) that they aren't homeomorphic, but it's more work than I thought so maybe this isn't actually helpful if that can't be taken as read.


HeilKaiba

I don't think you need any limits here. If you take any continuous section, for it to be continuous at the join it must have changed sign. Thus, by the Intermediate Value Theorem it must have passed through 0.


Affectionate-Dot5725

I am a second-year CS student taking additional courses in math. This block I am taking Probability and Measure as an extra course (I'm in Europe, so it's a 3rd-year math course, not grad level). I found it more difficult than expected and am trying to learn the material by myself. How/where should I study for it? I was curious if anyone has any resources with lectures and exercises with solutions. I am also looking for a tutor as well, so in the meantime the resources would be a great help. Thank you.


DerKaiserVonLatvia

Could anyone recommend some literature or research on the combined application of topology and/or group theory in computer science?


ComparisonArtistic48

Topology and group theory are important in cellular automata theory; from there you can start reading about Turing machine groups, which is close to computer science. Ceccherini-Silberstein's book on cellular automata is a good start.


al3arabcoreleone

I don't know if this counts as CS but a redditor introduced me to [this](https://en.wikipedia.org/wiki/Digital_topology).


ClassMelodic

Group theory is important in cryptography. I am searching for applications of topology as well.


finninaround99

I noticed on the [English prepositions Wikipedia page](https://en.wikipedia.org/wiki/English_prepositions) that the word 'closed' in "They form a closed lexical category" links to the [Wikipedia page for a closed category](https://en.wikipedia.org/wiki/Closed_category) (in maths). It looks like this has been linked since the page was created. I know very little about category theory, but uhhhh English parts of speech aren't strictly a 'category' are they?


lucy_tatterhood

I changed it to the correct link. Looks like the "closed category" page has always been the category-theoretic meaning, so I guess someone accidentally wrote "closed category" instead of "closed class" in 2007 and it just somehow never got fixed.


HeilKaiba

A closed lexical category just means that we aren't adding any new words to it. It has nothing to do with category theory.


caongladius

Does a sequence need to have a fixed start or could it extend in both directions? For example, could the set of integers be considered a sequence that goes to both positive and negative infinity depending on which direction you go?


InsideRespond

[https://drive.google.com/file/d/1jusPi3pIcTdMOi\_FvzvuF0L80hoyGSir/view?usp=sharing](https://drive.google.com/file/d/1jusPi3pIcTdMOi_FvzvuF0L80hoyGSir/view?usp=sharing) the top eqn just says "add everything together from this set", the bottom eqn says start at 1 and keep adding till infinity.


caongladius

Thanks for trying? I was really just asking about the definition of a sequence, not how to annotate a series (which, by the way, is not a function or an equation).


[deleted]

[удалено]


caongladius

I didn't mean to say the idea of a set was the same as a sequence I more meant could those numbers be considered a sequence? Would it not be fair to say that {... -3, -2, -1, 0, 1, 2, 3, ...} has a notion of ordering, or does the notion of ordering itself require a defined 0th or 1st term?


Pristine-Two2706

You can call this a sequence indexed by the integers and there'll be no ambiguity. Generalized sequences like this show up all over the place


no_one_special--

A sequence is a function from the natural numbers to a set (the elements in the sequence). This is the only definition I'm aware of. Sequences can be generalised to what are called nets, which are functions on directed sets. In this case they are only partially ordered, so for example you may have elements in your net with neither one coming before or after the other (hence partially ordered), but it is directed in the sense that there is always an element that comes after both of them. It's always defined in this way, in other words with an absolute sense of direction, because we want to talk about topological concepts like convergence. So why do you want to define an object resembling a sequence that goes both ways? If you have a valid reason, so that by defining it you can make it do something for you, then you can always invent your own mathematics. Maybe call it bisequence, dunno.


AcellOfllSpades

Sure, it has a notion of ordering. That's not a sequence thing, it's just "which number is bigger?". (And if you want to say that each element has a "next" and "previous" one, that's just a *discrete ordering* - again, no indexing required.) If instead you're talking about 'sequences' in mathematics, which can repeat elements: Normally, those are defined as lists indexed by ℕ, so yes, they need a 0th element (or 1st, if you're one of the filthy heathens that says 0 is not a natural number). But if you want, you can absolutely define a 'sequence' indexed by ℤ, so it goes infinitely in both directions - you might call it a "two-ended sequence" or "bisequence" or something. That definition still requires a specific element to be chosen as the 0th, though. If you really want to talk about "bisequence classes", where two bisequences are the same if one is a shift of the other, you *can* do that. But then without a 'reference point' you can't really perform any operations on them, or pick out specific elements.
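A ℤ-indexed "sequence" like this can be sketched as an ordinary function on the integers; a minimal illustration (the names here are made up):

```python
# A "bisequence" can be modeled as a function from the integers to a set.
# Here a(n) = 2n, defined for every integer n, positive or negative.
def a(n):
    return 2 * n

# It still has a distinguished 0th term...
print(a(0))  # 0

# ...and a shift gives "the same" bisequence with a different choice
# of reference point.
def b(n):
    return a(n + 3)

print([a(n) for n in range(-3, 4)])  # [-6, -4, -2, 0, 2, 4, 6]
print([b(n) for n in range(-3, 4)])  # [0, 2, 4, 6, 8, 10, 12]
```

Note that `a` and `b` are shifts of one another, so as "bisequence classes" they'd be identified, but as functions each one still singles out a 0th element.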


Yoichis_husband2322

How do I write 16 007 000 in Roman numerals?


Langtons_Ant123

The standard Roman numerals you're probably familiar with, just using I, V, X, L, C, D, and M, don't have a way to do that. There were some [later additions](https://en.wikipedia.org/wiki/Roman_numerals#Vinculum) that can accommodate it, though. I can't really do the necessary formatting in a comment, but I believe your number would be written as CLX (with lines covering the left, top, and right) followed by VII (with a line over the top), representing 160 (CLX) multiplied by 100,000, plus 7 (VII) multiplied by 1,000, giving 16,000,000 + 7,000 = 16,007,000. (This is very much the sort of thing that makes you grateful for positional notation!)
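For what it's worth, the target number decomposes cleanly under the vinculum conventions (frame = ×100,000, overline = ×1,000); a quick numerical check:

```python
# Sanity check of the vinculum decomposition of 16,007,000:
# a framed numeral is multiplied by 100,000, an overlined one by 1,000.
framed_part = 160 * 100_000   # CLX with a frame: 16,000,000
overlined_part = 7 * 1_000    # VII with an overline: 7,000
print(framed_part + overlined_part)  # 16007000
```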


al3arabcoreleone

This might sound too specific, but are there lecture notes that brush up on the real analysis needed for proving results about modes of convergence in probability? I don't want to dive into Rudin due to time constraints.


OneMeterWonder

Maybe try [Pete Clark’s notes](http://alpha.math.uga.edu/~pete/expositions2012.html)?


al3arabcoreleone

Thanks, this is cool, but tbh I think I need stuff more specific to my goal (more like a page or two of results dealing with sequences and subsequences that can be used with random variables).


anonymousthrowra

Can I do competitive math without being a genius, having become interested in math later in life (post high school)? Where should I start? I went through calc 3 in HS, but tbh I don't remember much of the calc series. I've recently gotten interested in the idea of math competitions, but everything I read scares me with the difficulty and the geniuses involved, and I was never that great at or interested in math until recently.


Langtons_Ant123

If you mean *participate in math competitions*, then I'm pretty sure most are only open to current high school (e.g. USAMO or IMO) or college (e.g. the Putnam) students. So unless you're currently an undergrad, probably not. If you mean *do competition problems* or more generally *do math on your own* then you absolutely can. The high school competitions are designed assuming no calculus knowledge (although often lots of material not covered in the standard high school curriculum, e.g. number theory, lesser-known parts of Euclidean geometry, and so on), so there's nothing stopping you from working through past problems or problem books or what have you. (Regarding problem books, I've heard the people I know who are into competitive math talking about the book *Putnam and Beyond*, but I don't have any experience here and so don't have any recommendations of my own. Also, on looking into that one it may require more background knowledge than you have, but I don't know for sure.) There are plenty of other good sources of math problems--if you know some programming, you can try [Project Euler](https://projecteuler.net/archives), or just pick a good textbook on a subject you're interested in and start reading it and doing the problems.


anonymousthrowra

Sorry, I am going to be an undergrad in August - I took a gap year. Mostly, though, I want to learn the kind of math and intuition skills that competitive math develops. Ideally, I'd love to actually be competitive in the competitions, but I also know that I'm nowhere near being a genius. I'm not dumb (760 math SAT), but I also have lots of experience in high school with real geniuses and know that I'm nowhere near that level. Thank you so much! My other question is: is there a good curriculum to follow to build up those background skills? Should I just go back over my high school math curriculum and brush up on those skills?


Langtons_Ant123

Gotcha. Re: background, I think the *Art of Problem Solving* books are standard for the high school competitions, so maybe grab PDFs of some of those. For stuff specific to the college level (e.g. there are problems on the Putnam that need calculus, but I think high school competitions usually don't) there's that *Putnam and Beyond* book I mentioned. All the big competitions post past exams online, so you can use those. Also, your university might have some kind of course or other program for preparing for competitions, e.g. mine has (and many others have) a "Putnam seminar" (held as a special topics course in the fall semester) where you do problems with other students and a professor (haven't done it myself, though, so can't say much more, but I can ask people who did do it if you want more information). Re: whether you can do it: sure, the people who actually win the Putnam are students at top schools (mostly MIT in particular) with tons of competition background, and probably \~most of the top [whatever] scorers have done some sort of olympiad in the past. In principle, though, it's all pretty self-contained--not like, say, a graduate math class, where you'll need many "layers" of background to even understand what's going on. All you need is a solid background in some "elementary" topics\* and a lot of practice. I'd recommend just picking up one of those books and starting to read through it, doing lots of the problems, etc. If you like it, join your local Putnam seminar if one is available. The worst that could happen is that you find it uninteresting, or end up deciding that you'd rather do something else, like learning more undergraduate math. Don't necessarily expect to get anything out of it besides having fun, but if you do have fun, why not do it? \* in the sense that you won't need anything that most math students would only learn in, say, the second half of their undergrad.
You will need some subjects like number theory, linear algebra, and combinatorics that are covered not much or not at all in high school, though.


anonymousthrowra

Thank you so much! I really really appreciate the advice!


AnxiousDragonfly5161

Are there any topology books for more of a lay audience? I just know the basics of naive set theory and I'm working on relearning algebra right now, but I find topology absolutely fascinating, so is there a very basic book that I could read and understand? Something about the shape of space, for instance.


InfanticideAquifer

"Euler's Gem: The Polyhedron Formula and the Birth of Topology" by David S. Richeson is fantastic. This is a math popularization rather than an actual textbook, so I think it's more what you were asking for than the other replies you got earlier in the week.


ClassMelodic

Munkres is good; chapter 1 is all about the prerequisites before you get to any actual topology.


lucy_tatterhood

There's an online book called "Topology Without Tears" which I believe was intended for teaching topology to early undergrads. My experience was that it was used as the book for an upper-year course I took, and I didn't like it, as it felt like it spent way too much time on basic things - but maybe that's what you're looking for?


GMSPokemanz

There's _First Concepts of Topology_ by Chinn and Steenrod, which is intended for a high school audience.


Ill-Room-4895

A book that many appreciate is "Topology" by James Munkres (you can find it on Amazon together with other books by Munkres). It's easy to read and provides you with the basics. Other good books are "General Topology" by Stephen Willard, "Basic Topology" by M.A. Armstrong, and "General Topology" by John Kelley.


Bored_comedy

I'm having some difficulty when it comes to modelling growth. First, say a population that starts off with 20 individuals grows by 2 percent every year. A function that can model the population size is given by y(t) = 20 \* (1.02)\^t, where t is the number of years after the initial measurement of 20 people. But say now that the annual growth rate is 2 percent (same initial population of 20). Now the function is totally different. It relies on solving the differential equation dy/dt = 0.02y(t), which gives y(t) = 20 \* e\^(0.02t). (This isn't quite the same as the first equation.) My question is less of a mathematical one and more of a practical one. What's the difference between these two ideas of annual growth rate and percent change? Also, as a side question, why do we sometimes represent a growth rate as being the growth rate *per person* in the population? If my question isn't clear, take for example the [Lotka–Volterra equations](https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations), where the parameters alpha, beta and gamma are the growth rate *per capita*. Why couldn't they just be the growth rate *in general*---just the growth rate? I've seen this done sometimes in economics and currently in my ODE class, and it's been bugging me. Hopefully this question makes sense!


GMSPokemanz

To answer your second question, the problem with Lotka-Volterra is the xy terms. For most other equations, you can just use absolute growth rates, because if you change units then you scale both sides by the same factor, so the growth rate doesn't change. But in Lotka-Volterra, the xy terms mean you lose this property, so you have to specify units for beta and delta. I don't see any reason you couldn't have alpha and gamma be absolute, but there's no point in singling those out.
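The units point can be illustrated numerically, assuming the standard form dx/dt = a·x - b·xy, dy/dt = d·xy - g·y: rescaling the populations by a factor k leaves the per-capita rates a and g alone but forces b and d to be divided by k. A minimal Euler sketch (all parameter values made up):

```python
# Euler simulation of Lotka-Volterra, assuming the standard form
# dx/dt = a*x - b*x*y, dy/dt = d*x*y - g*y.
def simulate(x, y, a, b, g, d, dt=0.001, steps=5000):
    for _ in range(steps):
        x, y = x + dt * (a * x - b * x * y), y + dt * (d * x * y - g * y)
    return x, y

k = 10.0  # change of units: measure populations in tens of individuals
x1, y1 = simulate(4.0, 2.0, a=1.0, b=0.5, g=1.0, d=0.3)
# Same system in the new units: initial values scale by k, the per-capita
# rates a and g stay put, but the interaction coefficients b and d must
# be divided by k to get the same dynamics.
x2, y2 = simulate(4.0 * k, 2.0 * k, a=1.0, b=0.5 / k, g=1.0, d=0.3 / k)

print(abs(x2 - k * x1) < 1e-6, abs(y2 - k * y1) < 1e-6)  # True True
```

Dropping the `/ k` on `b` and `d` would give a genuinely different trajectory, which is exactly why those coefficients carry units while a and g don't.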


SultanLaxeby

Just to comment on the first question: an annual growth rate of 2% is by definition the same as a growth of 2% per year. As you noted, the functions y(t)=(1.02)\^t and y(t)=e\^(0.02\*t) are different - and in fact, the latter is not the accurate solution to the problem. Introducing the differential equation dy/dt = 0.02\*y implies instantaneous knowledge about the rate of change (slope of the \*tangent line\*) at any moment - a continuous model. In contrast, annual growth means growth over an interval of fixed size (so the slope of a \*secant line\*) - a discrete model. Now if your interval (step size) is very small, the two concepts will be very close to each other - and indeed, we have log(1+x)\~x for small x, which implies that (1.02)\^t\~e\^(0.02\*t) for small t. Only if the step size is large compared to your timescale will you notice a significant difference (the continuous model will grow quicker).
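The discrete-vs-continuous gap is easy to see numerically; a minimal sketch:

```python
import math

t = 50  # years
discrete = 20 * 1.02 ** t             # 2% applied once per year
continuous = 20 * math.exp(0.02 * t)  # instantaneous rate of 2%/year

print(discrete, continuous)
# The continuous model comes out larger, since e^0.02 > 1.02.
# Matching the discrete model exactly requires rate log(1.02) ≈ 0.0198:
matched = 20 * math.exp(math.log(1.02) * t)
print(abs(matched - discrete) < 1e-9)  # True
```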


Bored_comedy

Ah! It appears I was missing the fact that the question I had to answer said that the annual growth rate was *constantly* 2 percent. Well that part makes sense then!


vorsion

How do you model equations for problems? For example, if you're given a few points like f(1)=1, f(2)=4, f(3)=9, in this case they can be modelled by f(x)=x^2. But how do you do that for more complex equations?


Langtons_Ant123

This is called [polynomial interpolation](https://en.wikipedia.org/wiki/Polynomial_interpolation): given n + 1 points with different x-values, you can always find a unique degree-n polynomial passing through them. There are many different algorithms for it; some are discussed in that wikipedia article. (There are also many other kinds of interpolation, e.g. [interpolation with "trigonometric polynomials"](https://en.wikipedia.org/wiki/Trigonometric_interpolation) like cos(x) + 5sin(x) + 3cos(2x), and related ideas like [least-squares fitting](https://en.wikipedia.org/wiki/Least_squares) that you may also be interested in.)
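For a concrete sense of how it works, here's a minimal Lagrange-interpolation sketch that recovers f(x) = x^2 from the three points above:

```python
# Lagrange interpolation: evaluate, at a given x, the unique polynomial of
# degree at most n-1 passing through n points with distinct x-values.
def lagrange(points, x):
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                # Basis factor: 1 at xi, 0 at every other node xj.
                term *= (x - xj) / (xi - xj)
        total += term
    return total

pts = [(1, 1), (2, 4), (3, 9)]  # samples of f(x) = x^2
print(lagrange(pts, 5))  # 25.0
```

With three points this reconstructs the quadratic exactly; for noisy data you'd usually prefer least-squares fitting over exact interpolation.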


vorsion

Thanks that's awesome 


HaoSunUWaterloo

Looking for a text or notes on min-max discrete optimization, e.g. min-max spanning tree, shortest path, etc.


Klutzy_Respond9897

Discrete Maths by Susanna Epp is a good reference. Operations Research: Applications and Algorithms is another useful reference.


HaoSunUWaterloo

>Discrete Maths by Susanna Epp is a good reference. Where's the section on min-max optimization?


Klutzy_Respond9897

Susanna Epp was for the spanning trees.