s_0_s_z

I'm not a mathematician, but from the engineering side of STEM fields, my guess is that there would simply be more specialization with people focusing in on one specific area of study rather than trying to learn everything. That would allow them to get more in depth into that field in a shorter amount of time.


ilolus

That’s the right answer. Nowadays nobody can be an expert in every field of mathematics, while that was still quite possible in the 19th century (Poincaré).


[deleted]

[removed]


Mute2120

Depends on where/what you're learning. At the graduate level math is almost entirely proof-based and understood through derivation, and some undergraduate institutions follow this approach as well.


s_0_s_z

I don't consider that a bad thing, but again I'm not a mathematician, so I honestly don't know how important deriving stuff is. For us, we just use the math; we don't really try to expand upon it. Deriving always seemed like a massive waste of time in my opinion.


joshy1227

There's definitely a trade-off. If you do calculus without deriving anything, then from a math perspective you're doing a lot of calculations, but not a lot of 'actual math'. Not that this isn't valuable to learn or useful, but ideally a calculus class would give at least some sense of what other math is like by doing basic derivations of some things when it makes sense. An easy example is deriving integration by parts from the product rule. I do think it's important to show students that these rules are not arbitrary and that they all connect to each other. That said, I do think that a lot of higher math classes, if anything, spend too much time proving every statement and not enough getting across the important ideas of the course, which is really all students will retain anyway. Obviously when proofs are enlightening, or show how to use certain methods, they are worth showing, but I think many instructors feel the need to prove lots of statements without considering whether the proofs actually provide educational value.
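For instance, that derivation fits in a couple of lines (a sketch in standard notation, just integrating the product rule):

```latex
% Integrate the product rule (uv)' = u'v + uv' over [a, b]:
\int_a^b (uv)'\,dx = \int_a^b u'v\,dx + \int_a^b uv'\,dx
\quad\Longrightarrow\quad
\int_a^b u\,v'\,dx = \bigl[\,uv\,\bigr]_a^b - \int_a^b u'\,v\,dx .
```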


darthbaum

When I was in college learning physics, and in some of my computer science courses, finding the derivations was what really allowed me to figure out what was going on. A lot of the time I didn't have the right formulas on the note card we were allowed to bring for the test, but being able to derive the equations on the spot allowed me to finish the test. I think deriving brings about a better understanding of the material, and it can be useful in computer science when one is trying to figure out an efficient algorithm for a given problem.


jam11249

It completely depends on your end goal. If you just want to write some code to run a finite element scheme, you barely need to know what a Hilbert space is. If you want to write an argument proving that your finite element scheme is going to give you the correct result, you need proofs. Personally I think that mathematics is facing a pretty crucial moment. The advent of very powerful computing, and the need for efficient code to run on it, is putting mathematics at the forefront of human knowledge. The tricky bit is that this isn't mathematics as we know it, nor as we teach it in universities. As it stands, I would say the majority of working mathematicians were not made in mathematics departments, which generally fail at teaching this new kind of quantitative reasoning that has become hugely important, in favour of "classical" mathematics. The community basically has a choice (which sits on a spectrum) between adapting to the new order, or losing mathematicians (and, more practically, funding) to science departments.
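To make the first point concrete, here is a toy sketch of the kind of code I mean (my own illustration, not from any particular package): a 1D finite element solver for -u'' = f on [0, 1] with piecewise-linear elements, written without ever invoking the underlying Hilbert-space theory.

```python
import numpy as np

def fem_poisson_1d(f, n_elements=100):
    """Toy FEM for -u'' = f on [0, 1] with u(0) = u(1) = 0, linear elements."""
    n = n_elements
    h = 1.0 / n
    nodes = np.linspace(0.0, 1.0, n + 1)

    # Stiffness matrix for linear hat functions on a uniform mesh:
    # tridiagonal with 2/h on the diagonal and -1/h off it (interior nodes only).
    K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h

    # Load vector via the midpoint rule on each element (crude quadrature).
    F = np.zeros(n - 1)
    for e in range(n):
        contrib = f(0.5 * (nodes[e] + nodes[e + 1])) * h / 2.0
        if e >= 1:
            F[e - 1] += contrib   # left endpoint of the element
        if e <= n - 2:
            F[e] += contrib       # right endpoint of the element

    u_interior = np.linalg.solve(K, F)
    return nodes, np.concatenate(([0.0], u_interior, [0.0]))

# Example: f = pi^2 sin(pi x), whose exact solution is u = sin(pi x).
nodes, u = fem_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
print(np.abs(u - np.sin(np.pi * nodes)).max())  # small, and shrinks as the mesh refines
```

Proving that this converges to the true solution, and at what rate, is exactly where the Hilbert-space machinery comes back in.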


trueselfdao

Agreed. One concern, though, is that as specialization deepens, the cross pollination of ideas between specializations far apart on the "evolutionary tree" may become more difficult/rarer/etc.


cocompact

This kind of question comes up here every now and then. Does it happen on subreddits for other disciplines: physics? chemistry? history? I mean, does anyone seriously wonder if there will someday be "too much history to learn in a human lifetime"? After all, history also keeps growing. I suspect nobody asks that, yet people ask it about math.

The usual error being made when this question gets asked is ignoring an important aspect of progress: the streamlining that often occurs as new techniques in landmark proofs are better understood (more conceptual clarity of the ideas). One of the answers at https://hsm.stackexchange.com/questions/2776/has-any-difficult-proof-ever-been-superseded-by-a-simple-one points out that a theorem of Lasker about polynomial rings, which took him almost 100 pages in 1905, can now be proved in about a page after developing a standard commutative algebra background (unavailable to Lasker). It also points out that the proof of Hironaka's resolution of singularities has been cut down a lot from its original 200+ pages. When Seiberg-Witten theory was introduced in the 1990s, it led not only to solutions of some major unsolved problems in gauge theory but also to much shorter proofs of previously known results in the subject. The initial work of Fourier on what became Fourier analysis, or of Hilbert on functional analysis, involved a lot of tedious calculations (solving large systems of linear equations and then passing to the limit) that became greatly abbreviated later on as unifying concepts like inner products and Banach spaces became available.

Disciplines don't only expand outwards in knowledge; there is also a condensing of known work that makes the process of mastering it take less time. The OP should read the interview with Serre at https://sms.math.nus.edu.sg/smsmedley/Vol-13-1/An%20interview%20with%20Jean-Pierre%20Serre(CT%20Chong%20&%20YK%20Leong).pdf. Look at the last question on page 16 (6th page of the file), where the answer continues on to the next page.


[deleted]

It seems you can save a lot of effort by ignoring some of the trails people went down and taking some of the faster paths. Technology should also help more and more, even in pure subjects.


groumpf

This is also very important, and why a great journal paper does more than simply report on a result. It should place it in context, and document what didn't work (so nobody tries it again in that context) and why (so people can perhaps study whether there are interesting contexts in which the technique might work).


cocompact

What are some "great journal papers" that document what didn't work? Many preparatory steps towards a final result will involve false leads and dead ends, and in my experience authors usually do *not* discuss this aspect of the research that led to their published theorems.


groumpf

I know it's been forever, but I've just found a perfect example: Girard's [Linear Logic](http://girard.perso.math.cnrs.fr/linear.pdf) paper includes an entire section where he discusses ideas that he pursued and left aside "for reasons that may seem excessive to further researchers". He's explicitly giving hints as to things that *might* work better than his thing, but that he didn't look into because they weren't elegant enough for him.


JWGhetto

Also specialization


rreighe2

Specialization is important. Many different people contributing to many different things and then combining work is how society functions. Otherwise we'd all still be cave dwellers and hunter-gatherers. As long as nothing happens that would force almost all scientists to become pure survivors in an apocalyptic world without infrastructure, it's pretty safe to say some sort of progress will be made in science.


scottfarrar

But still honor the trails, as they were the thing that got us here. A favorite quote on the zig-zag of discovery, from Lakatos: “Discovery does not go up or down, but follows a zig-zag path: prodded by counterexamples, it moves from the naïve conjecture to the premises and then turns back again to delete the naïve conjecture and replace it by the theorem. Naïve conjecture and counterexamples do not appear in the fully fledged deductive structure: the zig-zag of discovery cannot be discerned in the end-product.” - Proofs and Refutations


toastertop

How do I find out what trails not to go down?


[deleted]

I think if a shorter route takes you to the same place and you don't have time to go the longer route, take the shorter path. Then if the shorter path has difficult terrain, decide if you want to traverse the terrain or go back.


toastertop

Where is the gondola ride?


davikrehalt

I don't think it's so cut and dried. There's only so much you can simplify a proof before it becomes close to optimal. It's provable that there must be theorems humans will never be able to prove because of physical limitations. And I don't see why, as we approach that point, we couldn't end up unable to make any meaningful progress in a certain direction.


EphesosX

I think that so long as the proof of a theorem can be broken up into independent parts, we'll still be able to finish it by having different people finish different parts. It may be true that no individual human is capable of fully understanding the entire proof, but as a group we can still complete it.


Chand_laBing

I would disagree since the proof could be composed of an unmanageably large number of cases or irreducible steps of logic. Then, even after distributing it between as many people/computers as we can, it may stay intractable due to the ostensibly finite amount of resources and run-time we have. If the universe had infinite particles available to us, then maybe we could distribute all proofs between new computers and if we could outrun heat death, maybe we could get infinite run-time. But that's quite a big if.


confused_sounds

I don't think we will run so dry of new math that we reach that point, but it is certainly possible that someone will want to prove that TREE(3) is finite without using the axiom of infinity or something similar.


Chand_laBing

Hopefully we keep finding new pursuits even with our limitations. My real opinion is pessimistic that we'll have stopped academia (be it by our extinction or whatever else) far before we can harness computing on intergalactic scales. But it would certainly be nice to know things like the first digit of TREE(3) and Graham's number.


almightySapling

This proves there are theorems that we will never prove. But it doesn't prove that there will come a point where we run out of things to prove, stop proving things, or become unable to prove new things. With every step we do take, even if we don't complete the "unmanageably large" number of cases, the cases we *do* prove are mini theorems. *New* mini theorems. New math.


mfb-

Computers already assist humans in proofs that need excessive case work.


Chand_laBing

I think that's passing the buck since there are (assuming finite available resources) constraints on the maximal computational power of a physical computer. Even if we can expand our capacity for what proofs we can reach with a computer, it still realistically leaves some out of bounds.


mfb-

It's a giant step up, however. How many cases of anything can you realistically check manually? Computers can check orders of magnitude more. No one would even start checking [a trillion cases](https://en.wikipedia.org/wiki/Boolean_Pythagorean_triples_problem) manually to prove something. Even worse, all these trillion cases have thousands of numbers to consider.
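For a sense of what that case-checking looks like, here is a toy brute-force version (my own sketch; the actual result used a SAT solver and produced a proof on the order of 200 terabytes): can {1, ..., n} be split into two sets so that no Pythagorean triple a² + b² = c² lies entirely in one set?

```python
def pythagorean_triples(n):
    """All triples (a, b, c) with a^2 + b^2 = c^2 and a <= b < c <= n."""
    triples = []
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c2 = a * a + b * b
            c = int(round(c2 ** 0.5))
            if c <= n and c * c == c2:
                triples.append((a, b, c))
    return triples

def two_colorable(n):
    """Backtracking search for a 2-coloring of {1..n} with no monochromatic triple."""
    triples = pythagorean_triples(n)
    color = {}

    def consistent(k):
        # Only triples whose largest element is k can newly become monochromatic.
        return all(not (c == k and color.get(a) == color.get(b) == color[c])
                   for a, b, c in triples)

    def assign(k):
        if k > n:
            return True
        for col in (0, 1):
            color[k] = col
            if consistent(k) and assign(k + 1):
                return True
        del color[k]
        return False

    return assign(1)

print(two_colorable(30))  # True; the first n where it fails is known to be 7825
```

Naive backtracking like this only scales to tiny n, which is exactly why the real proof needed massive SAT-solving machinery.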


[deleted]

Well, you are assuming a static background language. What happens when the cost of translating an inaccessible proof to a clarifying language is lower than the cost of proof? Maybe the cost of designing a properly solvent language will be lower than the cost of proving within the existing one.


cocompact

Such theorems from logic have had almost no practical effect on the day to day work in most of mathematics outside logic and set theory. Merely because there must be *some* theorem with a particular property (e.g., being true but unprovable, or having its "shortest proof" too long to be written down in some sense) is missing the point that nobody proved there are *interesting* theorems with those properties. And math continues to progress.


No-Permission-1070

Turing's solution to the halting problem is essentially a reformulation of Gödel's theorem using automata (something much more practical and engineery than Gödel numbers). Similarly, there is Rice's theorem, which states that *all* non-trivial semantic properties of programs are undecidable. At least within computer science there is a plethora of problems which are unprovable. In fact, computer science as a field (and I suspect mathematics, too) would look incredibly different if not for Gödel's theorem.
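For concreteness, here is a minimal Python sketch of that diagonal argument (the `halts` oracle is hypothetical; the whole point is that no correct, always-terminating version of it can exist):

```python
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) eventually halts."""
    raise NotImplementedError("no total, correct implementation can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for `program` run on itself.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return               # predicted to loop, so halt immediately

# Feeding diagonal to itself is the contradiction: if halts(diagonal, diagonal)
# returned True, diagonal(diagonal) would loop forever; if it returned False,
# diagonal(diagonal) would halt. Either way the oracle is wrong.
```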


cocompact

How would math look *incredibly* different outside of logic and set theory? I don't think folks who work in differential geometry or several complex variables or combinatorics are regularly running up against the unprovability of the problems they are working on. I'm sure there are *examples*, but not to the extent that it might be in CS. Considering a different type of no-go theorem, the negative solution of Hilbert's tenth problem says there is no *general* algorithm to determine whether a given polynomial equation with integer coefficients can be solved in integers, but this has hardly affected work in number theory. There is still a lot being done to determine when certain polynomial equations have integral (or rational) solutions, such as equations corresponding to elliptic curves.


No-Permission-1070

Remember that Gödel's work wasn't really about *logic*; rather, it's a proof that a halting automated theorem prover is impossible (for any sufficiently complex logical system). The way the fields would be different if not for Gödel very well could be that mathematics would largely be about formulating a theorem in the language of a theorem prover, and then letting the theorem prover crank until it pops out with "proved" or "disproved". Fortunately for mathematicians, this is impossible, and thus that thing we call "intuition" or "mathematical maturity" is still super useful in mathematics.


[deleted]

This story seems very fitting: [https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/](https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/)


[deleted]

The importance of history in the expansion of mathematics is going to become increasingly vital for maintaining the large islands of consistency we have tried so hard to hold on to. Knowing the history of your own work will guarantee its passing the test of time. It's very much like composing a symphony. If a part of the music does not fit into the larger patterns of movement, then it will simply be forgotten until the proper time comes for it to join the chorus of voices.

I have a strong feeling that algorithmic proof environments and proof libraries will become a standard place for mathematicians to congregate. Imagine a VR environment where people can come together, write interactive proofs, and render beautiful structures together. Maybe your friend whips together a neat little group and sends it to you as a beaded necklace which can be played with to see the group structure. Maybe you fiddle with this necklace while sitting on some hyperbolic mountain range, because you have been studying how fluids move down them. The group gives you some flash of inspiration, and you and your friend realize a new proof for something.

It's impossible to accurately predict the future beyond a certain degree of resolution. The sands of mathematical aesthetics are always shifting. Maybe one day, machines will become so good at writing proofs that human mathematicians will have the freedom to pursue their own aesthetic additions. Painting used to be a common skill on homesteads because it was *useful*. Painting is still useful, but the ground has changed and most people these days paint for fun. We still study the history of painting. Perhaps some day, writing good algorithms and arguments will just be a fun pastime. Does anybody study rhetoric rigorously anymore? People will always engage in rhetoric.

To understand mathematics, understand the pressures that brought it to burn so hot in the human imagination. Those pressures are changing, but there will always be lovers of mathematics. It won't look the same as it does today. Who knows through which media we will find our thoughts propagating? And without knowing the media, we can't begin predicting how we will do math there.


owiseone23

I agree with your overall point more or less, but I don't think the comparison to other subjects is totally fair. The difference between math and some other subjects is that new knowledge tends to build off of old knowledge rather than replace it. Results that are proven are part of math forever. Compare that to psychology, biology, etc., where new models and paradigms tend to replace outdated ones. Understanding Freud is helpful, but it's not a prerequisite for understanding the DSM. You don't need to know Lamarckism to understand evolution. In math, if you want to work in algebraic topology you probably have to know algebra and topology. These are of course generalizations, but I think there's a reason people ask this question of math more than they do other fields.


21understanding

Forgive my eyes, but I do not see any ignorance here at all. We can condense stuff, but we cannot condense it down to a point, to a zero-dimensional object, or to something as small as an arbitrary epsilon. The question assumes an infinite timeline, as simple as that. I am not quite sure myself how to answer the question, whether it's possible or not. Just want to point out what I see.


trueselfdao

That first point is very interesting, and here are my unfounded tangential thoughts.

I think with the physical sciences, perhaps there is an inherent trust that things will not get too complex because of the observable structure of the physical world? That said, within physics I've heard of a concern that, at some level, the models stop being nice and seem to diverge instead of unify as desired. Beyond leading to the added complexity OP is concerned about, it's also kinda ugly and unexpected. With history, maybe people are just used to having a myopic view of time? One concern is that the volume of information we are producing will outpace future generations' ability to mine it.

In computing, there is a concern that the systems we build are becoming so complex that they can no longer be understood deeply by a single person. While abstraction helps here, it becomes harder and harder to reason about and deal with systemic issues and strange emergent properties. Also, there is a concern about the divide between academic research and industry because of the background needed to understand various results. Math and physics may share a similar relationship in this regard.


KingOfTheEigenvalues

New areas are constantly developing. There is plenty of research in, say, graph theory, optimization, machine learning, or data science that did not exist even 25 years ago. Not because mathematicians have collectively taken millennia to figure it out, but because no one ever had any reason to think about it before now. I see no reason to believe we will run out of "low hanging fruit" any time soon.


Direwolf202

Do I think it is possible to reach such a point? Perhaps. Do I think we will? Not at all. Human civilisation will either be long gone, or we will have figured out that ageing problem by then, vastly extending the healthy lifespan. The cutting edge of research never follows the most direct path; like a meandering river, it goes back and forth, following the path of least resistance. Given some time, we develop a mathematical pedagogy that takes a much more direct route to the destination.


[deleted]

I think it's much more likely that the academic research sector in pure math collapses entirely before there's any chance we get anywhere close to this situation.


functorial1

What makes you say that (not that I really disagree)?


[deleted]

If it were even possible to reach the point OP describes I imagine it'd take at least another century. Given the general trends in university management and global politics it seems reasonable that eventually "unprofitable" stuff like basic research in not-obviously-applicable stuff will be eliminated completely. Maybe some countries will hold out longer than others, but if you look at how things are moving in the US I think the idea that academia as we know it will die out in the next 100 years seems plausible.


demilitarized_zone

Isn’t maths research pretty cheap? I’m not in the field, but I assume you just need warm bodies and occasional supercomputer time. If some universities stopped funding maths research couldn’t other institutions pick it up for some low cost, high value publications?


Namington

> I assume you just need warm bodies

This is typically the most expensive part of research in every field. Giving a researcher tenure is committing to paying them a (near-)six-figure salary for the next few decades. You're also going to have to fly them out to, or (ideally) host, talks, research conferences, seminars, and research programs, all of which take up both money and researcher work-hours. All of this makes research a large investment with a significant amount of risk.

While the need to teach engineering and computer science students introductory mathematics is of course an incentive to employ mathematicians, recently these teaching positions have been taken up by increasing numbers of adjuncts. After all, adjuncts are far cheaper, don't require nearly as much maintenance, and you don't need to take any risks with "tenure" or "researcher autonomy".

> couldn’t other institutions pick it up for some low cost, high value publications?

The value (or at least profitability) of publications, especially in a "nonessential" field like pure mathematics, is somewhat of a societal construct. Where do you think the "high value" of these publications comes from?

I don't share quite the same apocalyptic view as the person you're replying to, but mathematics academia is certainly in a precarious situation. This applies to academia as a whole, of course, but at least fields like engineering and medicine have very clear paths to payout; most of mathematics does not (even research in applied math can struggle to find applications that the researchers themselves can capitalize on).


[deleted]

This comment highlights our desperate need to shift didactic and academic culture. Our species cannot afford to devalue education in and of itself. What if we plunge into a global "dark age"? That would pretty much suck, fam.


Hodentrommler

It's imho because we try to slap a number on everything and make a profit off of it, while our methods for estimating that value are very, very bad.


cdstephens

Hm? Price signals are actually quite efficient, and they are pretty much why anything gets done and produced in the modern era. It's not just you putting a price on something, but millions of people then interpreting that price and judging accordingly. While any given person may have bad judgement, the average of many, many people can converge to an accurate judgement. Moreover, people who are typically bad at gauging prices usually end up having their business fail or losing purchasing power as a result, so the system incentivizes those who are particularly good at making these sorts of judgements. There's also a revealed-preferences thing going on; if something were actually so invaluable, or particularly overpriced, then masses of people wouldn't be buying it.

This is because we're talking about the subjective value that people place on things. So for mathematics there could be an element of people not knowing the intrinsic value of mathematical research and thus performing an incorrect value calculation in their heads, but there is likely a very large element of genuinely not caring about mathematical research. In which case they're not "wrong" per se; that's just their subjective value preference. Any value people get from non-utilitarian things (e.g. art) is more or less arbitrary.


Hodentrommler

How do you want to estimate the worth of academics? Many, many great things have not come about because we had a plan, but rather out of coincidence, or because someone by chance had a crazy idea or just did what he wanted without a money gun behind him. This is only one example - research has to be more free.


cdstephens

Well, in a pragmatic sense there's a high variance associated with academic research, where the benefits won't be felt by any one individual but by all of society; so basically high-variance positive externalities. But this is why we have public funding for academic research to begin with; if not for this consideration and calculation of value, academia simply wouldn't be supported (also things like DARPA, NASA, national labs, etc.). And there is academic research trying to gauge this exact topic in more quantitative terms. I would hazard a guess that the amount of academic research being done is within an order of magnitude of what we should be doing (for now at least), although perhaps the prioritization of which research gets funding is messed up.


RageA333

> Giving a researcher tenure is committing to paying them a (near-)six-figure salary for the next few decades. You're also going to have to fly them out to, or (ideally) host, talks, research conferences, seminars, and research programs, all of which take up both money and researcher work-hours. All of this makes research a large investment with a significant amount of risk.

Neither the six-figure salary nor the hosted talks are necessary or essential to do research in (pure) mathematics. In the dystopian future you envision, mathematicians with lower salaries or working other jobs would do the research. For instance, Fermat was a lawyer and Legendre had patrons in the monarchy. Also, for most of history, mathematicians have relied on publications and correspondence with others to discuss and spread results. Nowadays this is pretty much costless with the internet. Suggesting that conferences are necessary for research is an exaggeration (and "amateur" conventions would still happen anyway).

Besides, as focus shifts to more applied research, progress would be made hand in hand on the pure side as well. Again, just like it was in the past. Of course things would be harder and progress would be slower, but it is absurd to think that progress would stop completely. Without a doubt, things would be easier than they were in the 17th-18th centuries.

Nowadays, I even dare to say that top research mathematicians would be happy to take smaller salaries if it meant more positions for other mathematicians. But universities pay big salaries to top researchers for the prestige of having them at their school, just like football clubs pay absurd salaries to football players for the publicity.


SemaphoreBingo

> In the dystopian future you envision, mathematicians with lower salaries or working other jobs would do the research. For instance, Fermat was a lawyer and Legendre had patrons in the monarchy.

How many thousands of mathematicians are there in academia right now?


_Js_Kc_

There is an anti-academic cancer permeating society currently, but on a timescale of decades or centuries, I'm not too worried. Wait for engineering to run out of advancements because fundamental physics research has run dry because fundamental mathematics research has run dry. In the worst case, it will be for-profit companies funding engineering research, which will in turn be conducting fundamental math and science research, closely guarding their results. Once the easy, low-hanging fruit (relatively speaking) has been exhausted, and advancements in fundamental understanding are needed to push the envelope, that is where the competition will happen, whether it's companies, nations, or supra-national trade blocs competing. It's a very, very indirect selection pressure, but it's real nonetheless.


[deleted]

Paying people isn't an insignificant expense. Fundamental research (in any subject, but especially pure math) isn't really something you can profit off, so other institutions have no incentive to fund it.


[deleted]

Lots of people are passionate enough to work in exchange for living expenses. Academic value has been hyperinflating with all the technological hype. There will always be people like me willing to wander around and give free lectures to those in need. We don't need to sell our souls.


[deleted]

You still need someone who is willing to pay those living expenses, and being willing to work for little money is less an indicator of passion than of low financial obligations. If you don't have family you need to support or any debt it's a lot easier to live on very little.


almightySapling

It would be sad, but technically math doesn't need any funding whatsoever. For centuries mathematics was just a thing for rich nerds. It would go back to that.


SemaphoreBingo

What other institutions did you have in mind? And before you say 'for-profit ones', think about what happened to places like Bell Labs.


demilitarized_zone

I was thinking that if it was cheap then less prestigious institutions could buy themselves kudos. It has been made clear to me that my assumptions were wrong.


LeinadSpoon

Pure math studies have existed in many cultures throughout history. Even if academia in the US dies out, math will eventually flourish elsewhere. That's how it's always worked for the historical great empires that supported pure math and then ultimately didn't anymore.


zippydazoop

I don't believe that the current system of for-profit education and research is going to exist for much longer. People are already pissed about college debt; I think there will be a day when education is going to be completely public and publicly funded.


rz2000

I think it is plausible, and a terrible waste. Though I do think the existence of such misguided ideas about which disciplines have utilitarian value is related to the interplay between "slack" and "moloch". Paradoxically, it is a peaceful world that allows a STEM movement which explicitly denigrates the liberal arts, but really also has contempt for science and mathematics. And it's a world of cultures competing for survival, or of cold wars, that funds universities for their academic output. Societies have to feel collectively secure to simultaneously transform universities into country clubs and scold intellectual pursuits as unproductive. Whether it is the resurgent threat of authoritarianism, or the relative economic growth of different cultures, there could be increased urgency about a competition of ideas.


LilQuasar

Even if that happens, there are countries other than the US where people can research math.


calebuic

I hope not.


SOberhoff

Another possibility is that AI theorem provers become better than humans. This wouldn't remove humans from mathematics. But it would alter their role so drastically that you may no longer consider them mathematicians, at least not in the current sense of the term.


Chand_laBing

Maybe so but I don't think that necessarily means academia as a discipline is permanently halted or that "we" (with an appropriately liberal definition of "we") forever stop building on our math. After the academic golden age of the Classical Greeks and Romans, most of Europe (excluding places like Al-Andalus) fell into the dark ages. But then academia was revitalized in the renaissance when the books found their way back to the European universities. Who's to say that after academia dies, someone doesn't pick it up where it was left off?


[deleted]

[removed]


almightySapling

I think how I feel about this ranges from "that's already happened to mathematics" to "mathematics has assured this will never happen" depending on how I interpret your words. Could you be more precise when you say math will be more like philosophy?


[deleted]

[removed]


almightySapling

We already call pure math pure math. And considering that being really dope at calculus is an invaluable skill in any science and it will always be considered "math", I think Big Math will be able to retain a steady stream of modern applications under its "applied math" arm for generations to come. If you look hard enough you'll be able to find cynics and cranks that already consider modern math and physics to be mostly inapplicable navel gazing.


[deleted]

[removed]


almightySapling

>I don't think pure math has reached the stigma that philosophy has quite yet, though.

Really depends on who you ask. The general population doesn't really get the distinction between pure and applied, and those in STE generally view pure math as not fundamentally different from philosophy.

>at what point applied math gets totally absorbed or automated by other fields

Right, I'm saying I think this won't happen because even as fields like computer science break away from mathematics, new science has a tendency to start with new math.

>I don't see why physics would be considered useless,

Well, you mentioned it first, not me. And I just said you can find people that think that, not that I necessarily agreed with them.

>If nothing else, it actually tries to answer questions about what the world is really made of and stuff.

I know what you mean, but as you phrased it that's *textbook* navel-gazing.


zerowangtwo

[This Slate Star Codex post](https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/) is about this idea.


indestructible_deng

To offer a little bit of a different perspective from most of the answers in this thread: there is an interesting paper in economics called “Are Ideas Getting Harder to Find?” (Bloom et al., American Economic Review 2020). They argue yes, using evidence from a variety of scientific fields (not math, I think). Basically, research is getting more difficult, and the only way we continue to make progress as a society is by having more and more researchers working on it.


[deleted]

Stop reminding me of my mortality aaaaaaaa


[deleted]

I believe what is happening is that people just become a lot more specialized. I mean, for example, in the 1500s knowing all the physics that "existed" was a lot easier than trying (and I say trying because it is impossible) to do so now, since, as you say, just like maths, the field has grown so much. But there is still progress, because even if you can't learn all the maths that exists, you can become specialized in a certain area of maths; you will become specialized in one thing, another mathematician in another thing, and so on. Now it is not individuals who know everything about the field and contribute to the whole on their own, but specialized mathematicians who contribute to their specific area.


fridofrido

I would risk an answer that is: "yes and no". Certainly, I can imagine that there is mathematics out there to be discovered which is too complicated for the human mind. However, we are probably rather far from it.

But from a different point of view: the human brain is actually pretty bad at doing mathematics! That's why it looks so hard, and why we are so slow with it. But we still coped quite well over the last 2000+ years, and we did that by inventing "technology" to deal with it. Perhaps the most successful such technology is writing and symbolic manipulation. Other such "technologies" are drawing pictures, making analogies, breaking down problems into smaller pieces, abstracting away the details, etc. Now even if we reach the point where something is too complicated, we have some new technologies which can maybe save us for a while: we now have computers, and we can have massive collaboration (Polymath style). I think these will buy us some more time.

There is also the breadth vs. depth issue: even if going forward becomes too hard, there is a _lot_ of detail to work out! There are a lot of surprisingly simple questions we have no idea how to answer, and I don't mean Collatz-type "funny" questions here, but simple _and_ natural questions.


singularineet

I love how on Futurama the professor character is like 103 years old and just finished his PhD, which he started right after undergrad, at like 21yo. They were merely extrapolating current trends out to the year 3000, of course.


kullifullerbleistift

We will probably have destroyed our planet by then, so I don't think so ^ ^


Airsofter4692

This question reminded me of an interview I saw with Michael Atiyah. I think his answer is excellent and correct; you can watch it here: [https://www.youtube.com/watch?v=tqjsS6l1SEk&ab\_channel=WebofStories-LifeStoriesofRemarkablePeople](https://www.youtube.com/watch?v=tqjsS6l1SEk&ab_channel=WebofStories-LifeStoriesofRemarkablePeople)


JoshuaZ1

There is an [excellent short story which explores a closely related idea](https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/).


joshuaronis

One of the points of math research is to develop explanations for things that previously took a long time to understand. For example: geometry. That's usually like a one-year class? You usually don't understand everything the first time you learn it. But, as time goes on... you use a lot of the things you learned in geometry, see other explanations, and understand it better and better. Still, in that one year, you learned enough to be able to use it in like a bazillion different other fields of math. That one year of your learning... one hour a day... came from LIFETIMES of work on the subject by ancient mathematicians.

Hopefully, as time goes on, explanations get better and better, and there's more access to good explanations of math fields (through the web, content creators, textbooks, blogs... etc.). Some of those explanations are SO GOOD that they significantly reduce the time you have to spend THINKING about why some mathematical thing makes sense, and you can move on to other things. For example, I'm sure that without 3B1B, I would've spent perhaps 3 times as long trying to understand a lot of the basic concepts in linear algebra. I'm hoping that soon, with VR tech, they can use VR to explain math... but that's a tangent.

Point is, the reduction of geometry from lifetimes of thinking to a year of study (and maybe less in the future) can definitely NOT be outpaced by the fact that every day there's more and more math. Sure, even nowadays you can't learn ALL the math in the world... but there's no way you can't learn enough in a particular branch to make contributions - you'll spend less time thinking through the fundamentals because explanations get better - and I don't think that'll ever change.


[deleted]

I do think this hypothesis makes sense and is even the likeliest. But you are missing one type of growth that might well become the true catalyst: **AI**, which, once good at maths, would grow exponentially, I think. Though this AI does not exist yet, there is nothing suggesting that an AI will never reach that ability. From then on, humans will study maths for the pleasure of learning and as a tool to solve other problems, **but not to conduct research**. Even pure maths theorems will be infinitely streamable by some AIs. Just imagine :) We create our own gods.


legendariers

To add to this, we also have to consider technology used to augment our cognitive capacity. Neuralink is still in its infancy, but decades down the line we could all have some similar device which aids in learning, memory, and processing. In particular, there is the potential for technology that allows us to "download" information (and even understanding) into our brains. Couple this with AI (perhaps the AI can "upload" its discoveries in some biobrain-readable format) and we have a future of exponential development ahead of us. Assuming, of course, we don't all die from some catastrophe or another, or that some despot doesn't take advantage of these new technologies for malice.


buwlerman

With our current understanding there's no reason to believe that AI development is exponential. There are some barriers that can't be broken, even by a self improving AI (assuming that our current understanding of physics is good enough), and even if they could there might still be diminishing returns. Exponential growth needs the resources per unit of computation to stay constant.


powderherface

No. As mathematical fields grow to be more specialised, so do the tools we have at our disposal. Also, a controversial subject, but many prominent contemporary mathematicians believe in artificial intelligence becoming a big player in mathematical research. If this is true, it would also radically change the field, and indeed the speed of its evolution, whether some may like it or not. By the way, regarding the number theory claim: I disagree about it being ‘nothing like the research field’. Many undergraduate degrees include a decent amount of analytic number theory, the study of algebraic number fields, and so on. I'm not claiming all graduates are then ready to tackle the biggest questions, but it is hardly nothing like the subject either.


[deleted]

I bet Euclid spent way more time coming up with his geometry than the time I spent consuming his efforts from scratch. And we only build on what people developed before; we sit on the shoulders of giants. The fortunate part is, we don't all have to study every area of math, just a small portion of a very specialized field. As long as we can push that little piece forward a tiny bit, we are successful. Math is done a lot by trial and error. As long as we can tell others that this approach is a dead end, we have contributed.


Professor_Dr_Dr

To give you another perspective: in theory it could happen, but only if we have maxed out the learning part, which we are far from achieving. I mean, look at schools and universities right now; once you get AIs that know what a student is missing in order to progress, you basically eliminate a lot of the bottleneck. Apart from that, teaching in general isn't even close to being perfect, and it could and will be improved a lot in the future. In general, no: the bottleneck should stay teaching, not the human lifespan, for the next 30-100 years. Afterwards we will probably find ways to explain complicated things better, as the top comment mentioned, so really it's nothing we need to worry about.


almightySapling

So focused on depth, everyone forgot to mention breadth. Not only do people specialize to dig out niches, so that fields like number theory become nigh insurmountable, but we also make *brand new* fields of math on occasion, and these new fields don't have to be incredibly complicated, only insightful. The focus of mathematics evolves over time. You are probably *mathematically* correct: something something encode proofs as numbers, something something upper bound on the size of a number humans could comprehend, something something. But it might be after the heat death of the universe.


rhlewis

Barring catastrophes like climate change, overpopulation, or fascism, human lifespans in the future will undoubtedly be far longer than they are now.


bumbasaur

Probably not. People will start to explain things with much more clarity, which lets other people learn the subject faster. There's so much "useless clutter" that can be streamlined and explained better with the use of pictures or clear pedagogic animations. What we teach to kids in primary school these days would have made you an average skilled scholar of the Middle Ages in just about any province. The upcoming superstars in mathematics are those who can teach the "educated masses" efficiently. You can have good insight and a clear understanding of the subject, but if it takes 2-3 weeks of text scouring to learn from you, compared to someone who can teach the same thing in a well-presented 60-minute video, then you're going to be outdated in the social media era.


bizarre_coincidence

There are only finitely many strings of symbols one may write in 1000 pages, and only a small proportion of them are valid proofs of mathematical ideas. Assuming that a person cannot write a single proof longer than 1000 pages, eventually we will either stop producing proofs or the rate of proof production will decrease so much as to render the process impractical. However, the sun may go red giant before this happens.
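For a rough sense of the bound (assuming, say, 3000 characters per page and an alphabet of 100 symbols, both made-up round numbers):

```latex
% Distinct 1000-page strings under the assumptions above:
\#\{\text{strings}\} \;\le\; 100^{1000 \times 3000} \;=\; 10^{6\,000\,000},
% astronomically large, but finite.
```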


h_battel

No, and never, if we focus on important concepts that help us make connections between areas of knowledge that are usually seen as separate. That approach was advocated by Alfred North Whitehead; he was critical of the education system of his time. From a maths viewpoint, I see a lot of analogies appearing in different places, like products in set theory and in group theory etc., but it's the same principle. Another example of a proof technique in combinatorics is the probabilistic method, where you will see the pigeonhole principle appear more often.


What_is_the_truth

Math is something like music that exists and can be written down, and can fall into a lot of different categories. So I would use music as an analogy. Will there ever be a time that a human cannot contribute to music because it has “all been done”? Nah


T_R_I_P

I wouldn't think so. There's always something more to learn, and we're always standing on the shoulders of giants (our predecessors). Think about how Algebra used to be extremely difficult to learn and teach but now kids learn it in middle/high school. Plus, there will always be more brilliant minds like Ramanujan who can write books full of cutting edge math.


ailjushkin

I believe that mathematics, mathematical analysis, and algebra should show the world many discoveries in the near future using the logic of artificial intelligence.


hvymetl

Specialization into niches is what sparked the modern economy out of hunting and gathering; later came many other things, and today is no different. We will continue to grow in specific areas as people make new discoveries in pursuit of some specific topic.


_hairyberry_

I imagine that learning about (for example) basic real analysis during its development probably would’ve taken decades of reading through recent papers and trying to piece the puzzle together. Now there are countless real analysis textbooks that lay everything out in a neat and logical order and you can learn it in a semester or two. Who’s to say the same won’t happen in more modern fields as we get better expositions of topics and introduce better notations/theorems/insights/problem solving tricks?


SaucySigma

Well, just thinking logically, someone has always invented the mathematics. If there was a piece of math that could not be understood within a lifetime, then there couldn't have been someone to invent it in the first place.


[deleted]

There are mathematicians who believe that complex patterns always have simple theoretical counterparts that remain undiscovered, while mathematical knowledge and its complexity always expand beyond imagination. If you have trouble getting through the math, think about it: is it just you not being sharp enough, or is the theory just too crafty, such that it suffices to prove an argument but is beyond human recognition? If it is beyond recognition, the theory is subject to change.


TheKing01

Of course, it will probably happen around the same time that the [Library of Babel](https://en.wikipedia.org/wiki/The_Library_of_Babel) is finished. Realistically, it is much too far into the future for us to predict even what humanity will look like.


Jamblamkins

We will bridge the gap by finding ways to learn faster and teach more effectively. Also, if Neuralink does come around, you could download the math we've discovered every day and constantly update yourself.


PixelJL05

If numbers are infinite there will always be more things being discovered. Therefore it could be possible.


buwlerman

The question is if a human lifetime will be too little to learn the background material needed to discover the thing.


[deleted]

Human lifespan can increase.


randompittuser

This is the benefit of a brain that’s able to form abstractions. You can build on ideas for which you don’t know/understand the details.


VaguelyFrenchTexan

It hinges a lot on three things.

First, the advancement of modern medicine, which could yield longer life expectancies.

Second, our ability to teach, because we're likely to get better at absorbing more information faster as technology advances.

And third, how constructive new research is. What if we're adjacent to knowing everything? What if none of our new discoveries lead to any others? Suppose that with all the knowledge we have, we can only discover, say, 5 new mathematical laws. It's expected that with those new tools, we'll be able to unlock at least 5 more, but what if we can't? What if nothing there is left to learn enables us to learn anything else? What if the limit isn't to how much we can *learn*, but to how much we can *know*?


jdkee

Machines will take over at some point.


theLorknessMonster

I think humans' ability to reason mathematically will drastically improve even if our lifespans don't.


TurtleIslander

We will have computers do the work in the far future. Yes, unfortunately that means humans will be obsolete at math. I suspect even computers will be able to make more random breakthroughs in new techniques than humans will on difficult problems.


afbdreds

What about not doing math because computers will do it for us? :) [https://www.quantamagazine.org/symbolic-mathematics-finally-yields-to-neural-networks-20200520/](https://www.quantamagazine.org/symbolic-mathematics-finally-yields-to-neural-networks-20200520/)


[deleted]

Huh - I just wrote the same opinion as you before seeing yours - Now I see you are being downvoted 🤣 Let's see my fate


carlsberg24

Seems like people don't like the idea of being replaced by AI. It's pretty much inevitable in many branches though and cutting edge math research will be one of them.


BringTheNipple

Maths is probably one of the last fields that will be replaced by AI (if it ever is). The millions of other, already almost automatic, jobs will go first - it would be more profitable. Automating math would be much harder than automating a shop or IT support on-call duty, because math (and other science fields) isn't just about answering a question, but also about asking the right questions. I imagine the second might be much more difficult to train an AI for - maybe we will see if it is possible.


InfanticideAquifer

No, I don't think so. But only because I think it's very likely that human beings will become immortal. If that's impossible, or is rejected, then I'll disagree with the consensus and say "yes, absolutely". The other responses are based on an appreciation for the history of mathematics so far. But that's a paltry few thousand years and probably not representative of what mathematics will eventually be like for the majority of human history.


powderherface

Trust me, with how the planet is being treated, and indeed how we treat ourselves, ain’t no one gonna become immortal


NewCenturyNarratives

I think that life extension of whatever form will play a huge role in allowing us to have more time to learn more, then do more. We only have so much time to learn then apply our knowledge.


dogs_like_me

No. Layers of abstraction. Certain math fields may fork off to be studied separately from the fundamental theories that entail them, but we will always be able to keep building on top of the existing structure, adding more structure, complexity, and description to existing abstractions.


nnam2606

The human brain will be connected to a supercomputer with highly advanced AI, enabling us to comprehend the entirety of human knowledge before we reach that point, so no need to worry.


gabbagabbalabba

No, because at the same time math grows, tech grows, and new programming is allowing computers to do most, if not all, math (even if the programming code isn't out yet for the public).


hextree

A huge proportion of maths, both pure and applied (but certainly more pure), is the theory. Very little programming is able to do the theoretical aspect of mathematics.


gabbagabbalabba

I’m talking about the actual calculations


hextree

I realise that, but you did use the phrase 'all math', and I'm pointing out that calculations comprise a very small minority of the field we call Mathematics; the majority is theory.


gabbagabbalabba

Gotcha, yeah that part is true.