Michael Harris has a new article at Quanta. The piece is (uncharacteristically?) coy, referring to the laments of two logicians without divulging either their names or their precise objections, making oblique references to a cabal of 10 mathematicians meeting at the Institute, and making no reference at all to his own significant contribution to the subject. But that aside, the piece relates to one of the themes from Michael’s book, namely, what is mathematics to mathematicians? In this case, the point is made that mathematics is decidedly not — as it is often portrayed — merely a formal exercise of deducing consequences of the axioms by modus tollens and modus ponens. More controversial, perhaps, is the question of what number theorists stand to gain from a massive investment in the formalization of mathematical arguments in (say) Lean. (I “say” Lean but I don’t really know what I am talking about or indeed have any idea what “Lean” actually is.) As you know, here at Persiflage we like to put words in people’s mouths which may or may not be a true reflection of their actual beliefs. So let’s say that MH believes that nothing produced by such programs will ever yield any insight — or at least not in any way that would count as meaningful insight for humans (if a computer could talk, we wouldn’t be able to understand it). KB believes that without the promised salvation of computer-verified proofs, modern number theory is in danger of shredding itself before your eyes like that Banksy. What do you think? Since everything comes down to money, the correct way to answer this question is to say what percentage of the NSF budget you are willing to see spent on these projects. Nuanced answers are acceptable (e.g. “as long as some really smart people are committed to working on this, the NSF should get ahead of the curve and make it a priority” is OK; “better this than some farcical 10 million pound grant to study applications of IUT” is probably a little cheeky but I would accept it if you put your real name to it).
Let the battle begin!
(Photo credit: I went to Carbondale to see the solar eclipse, but all I saw was this lousy sign. The other is just a random web search for “vintage crazy pants”.)
Michael and I are having a face to face face-off on the 20th June in Paris (in a philosophy department of all places): https://mdetlefsen.nd.edu/philmath-intersem/philmath-intersem-10-2019/
Will there be a YouTube video of it afterwards?
Apparently not.
So perhaps I should leave a more careful response, laying out my side of the story more clearly.
I think that asking the generic question “how much money should the NSF be spending on formalisation” is not quite the right question. Areas of mathematics go in and out of fashion depending on how successful they are currently being, how many interesting and partially-accessible open problems there are, and so on. It’s clear that the NSF should be spending money on, say, arithmetic geometry, because the field is currently moving very quickly, problems are being solved and one feels like there is some sort of slow but genuine progress towards major conjectures like modularity of etale cohomology of algebraic varieties over number fields (I say “slow” because one has to occasionally remember that potential modularity of abelian surfaces over totally real fields represents 0 percent of abelian varieties over 0 percent of number fields on the one hand, whilst being a significant breakthrough on the other as it is a case where Hodge-Tate weights are coinciding in some serious way; we all know that this is exactly what progress can look like).
But the fact that the area is active of course does *not* mean that money should just be thrown at random projects because they have the words “arithmetic geometry” in the title of the grant proposal. One should fund projects by researchers who perhaps have shown some real potential in the past, but more importantly whose proposals contain ideas which the experts assess as being both important and reasonably likely to have some chance of making progress within the required time frame. A great *non-example* of such a proposal would be a complete formalisation of a proof of Fermat’s Last Theorem. I am lucky enough to have some insight both into what the full proof entails and into how hard it is to formalise mathematics on a computer. So I can see that formalising the proof should certainly be possible in theory, but would take hundreds of person-years in practice, and furthermore would teach us nothing important about the ideas which went into the proof (I regard the possibility that there is a hitherto undiscovered gap in the proof as being extremely unlikely; of course formalisation would have a very large chance of uncovering such a gap, but I regard this as a very weak argument for formalisation of FLT).
However not all formalisation projects are of this nature. Tom Hales’ Formal Abstracts project https://formalabstracts.github.io/ is proposing something which will take a long time but which is manifestly both (a) much broader and (b) much more useful. Mathematicians know the usefulness of MathSciNet — a big database of reviews/abstracts covering essentially every paper in every respectable maths journal. Creating this database required time and effort, but it happened. A formalised database of mathematical objects, facts about them and relations between them — there is no reason at all why such a database could not exist. It will facilitate search for mathematicians — imagine a future where you don’t have to pester Brian Conrad about whether some result about rigid spaces is true because you can actually look it up. Because the database will not contain proofs, but just formalised statements and references to the dead tree literature, this cuts out the “hundreds of person years” bit. And because it’s formalised, AI algorithms from the future will be able to look at our database and make some suggestions about other theorems. At the beginning it will be crap. But I am old enough to remember when Wikipedia was crap for maths. The internet can make things happen. Because of the specialised nature of the project, it has the potential to go far far deeper than Wikipedia and into serious modern mathematics. The analogue of a MathSciNet reviewer is someone who knows Lean’s computer language — or the controlled natural language which will sit on top of it within a few years and which will be human-readable — and formalising the statement of a theorem can be done in a matter of minutes if you know what you’re doing.
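(To make the “formalised statements, not proofs” idea concrete, here is a minimal sketch of what a single entry in such a database might look like, in Lean-style syntax (modern Lean 4 conventions). The theorem name is illustrative, and the `sorry` simply records that no proof is stored: only the statement plus, in practice, a pointer to the dead tree literature.)

```lean
-- A hypothetical "formal abstract": the statement of Fermat's Last Theorem,
-- phrased over the natural numbers, with the proof deliberately omitted.
-- A Formal Abstracts-style database would store just the statement and a
-- citation; the `sorry` marks the absent proof.
theorem fermat_last_theorem (x y z n : Nat) (hn : 2 < n)
    (h : x ^ n + y ^ n = z ^ n) : x * y * z = 0 := by
  sorry
```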
For decades now I have been mentioning the following idea to people. If one human being suddenly knew what every human being knew about mathematics, how much more would they instantly be able to see? This idea came to me when I proved Ribet’s level-lowering theorem for p=2 (the day after returning from Glastonbury music festival). The proof was simply a realisation that Ken’s original proof also worked for p=2, modulo a couple of tricks which were already known to the experts. I emailed Ribet and Taylor sketching the argument and got immediate responses from both, each saying “I thought the problem was here” for completely different values of “here”. With a computer database of interesting mathematics one can begin to dream that such realisations, discovered either by humans or by computers, might become more likely to happen. But for this database to become a reality we need a culture change in mathematics departments — we need to invite computer scientists into our departments and *show them what we are working on*. If we don’t do this then they will just keep making newer and better provers and then using them to verify Gödel’s incompleteness theorem or other random bits of maths which they picked up in computer science school. So this is where I think some money should be going — into small projects where mathematicians and computer scientists can work together formalising bits and pieces of mathematics, enough to make all the *definitions*, so that the project can get off the ground. I’ve done perfectoid spaces (with Patrick Massot and Johan Commelin — Massot will also be in Paris on the 20th) and in my mind this proves beyond all doubt that it can be done with the tools we have today. I don’t see any obstructions to defining the etale cohomology of a scheme, and Shimura varieties will take longer but are definitely within reach. The theory of computer proof systems has moved on substantially in the last few years, and the sooner the mathematical community wakes up to this idea, the sooner it can start moving even further. I find this whole story an extremely appealing picture and one which I think can change our subject for the better.
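(For what it’s worth, here is a deliberately toy sketch, again in modern Lean 4 conventions, of what “making the definitions” looks like in practice. This is not mathlib’s actual definition of anything and all the names are made up: a mathematical object is packaged as a structure whose fields are the data plus the axioms, and a concrete example is an instance of that structure together with proofs of the axioms.)

```lean
-- A made-up illustration (not mathlib's real definition): a monoid-like object
-- packaged as a structure whose fields are the data (an operation and an
-- identity element) together with the axioms they must satisfy.
structure MyMonoid (α : Type) where
  op       : α → α → α
  e        : α
  op_assoc : ∀ a b c, op (op a b) c = op a (op b c)
  e_op     : ∀ a, op e a = a
  op_e     : ∀ a, op a e = a

-- The natural numbers under addition give an instance; the axiom fields are
-- discharged by lemmas already in Lean's core library.
def natAddMonoid : MyMonoid Nat where
  op       := (· + ·)
  e        := 0
  op_assoc := Nat.add_assoc
  e_op     := Nat.zero_add
  op_e     := Nat.add_zero
```

The only point of the toy is its shape: data and axioms bundled together, on top of which further definitions, statements and (eventually) proofs can be layered. Perfectoid spaces are the same game with many more layers.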
The remarks Kevin makes about level lowering for p = 2 suggested an interesting exercise that I assigned to myself: what were the two specific ideas, and who was missing what? (Since I don’t nail the first part down exactly, I won’t speculate on the second.)
My first attempt at the exercise involved trying to do everything from memory. (Of course many of the ideas here are related to later work of Wiese and then to my paper with Geraghty, so they should be familiar to me.) Taking a slight tangent, the concept of an “idea” in mathematics is a peculiar (and fuzzy) one. First, “ideas” are a sort of currency in mathematics — when evaluating a piece of mathematics, one often talks about the result having N ideas for some value of N which is usually pretty small. (Usually N = 0.) Second, the authorship of ideas is a very fickle business — almost everything new has echoes of things which have come before. Finally, there are ideas which take a great deal of creativity to find but which, once given, are transparent — these are often underappreciated after the fact. (I have constant debates with one of my co-authors on this point, which I may get back to another time. I think his basic claim is that everything I have done is obvious in retrospect; without necessarily conceding this point, I try to argue that this in itself is not a disqualification for being judged competent.)
Anyway, back to the question at hand. I only once heard Ken give a series of lectures on level lowering. My mind is fuzzy, but I seem to remember they were held in the old lecture hall at MSRI, possibly around 2000. I think that year there was an MSRI hot topics conference on Artin’s conjecture, but I’m not sure if these lectures were directly related to that conference or not. (I don’t think so, I think instead Ken was at MSRI for the semester.)
The first thing that springs to mind is: one idea was the use of auxiliary primes q satisfying:
$latex \rho(\mathrm{Frob}_q) = \left( \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right).$
I guess the point is that when p = 2 it’s impossible to find odd primes q which are not 1 mod 2 (every odd prime is automatically 1 mod 2), so primes whose Frobenius is the non-scalar unipotent matrix above are used instead. As Barry Mazur is famously quoted as saying, “just add Gamma_0(q) structure”. Of course, the point is that one then has to prove a variant of Mazur’s geometric level lowering with Gamma_0(q) structure in this non-scalar Frobenius case.
Mazur’s argument uses a number of things which now have to be modified. The first is multiplicity one. So I think Kevin must have proved multiplicity one in the new case when Frobenius at 2 has repeated eigenvalues and is non-scalar. I think the key point here is looking at J_0(N)[m] as an extension by two modules (the etale and ordinary part related to the Hecke algebra and its Z_p-dual) and exploiting the non-“diagonal” nature of the associated Hecke operators at 2. So I think seeing that this could work without assuming Frobenius has distinct eigenvalues was also a key idea. Then there is the problem that Mazur’s argument also uses that q = 1 mod 2, and here I’m not sure exactly what the point is, but I’m guessing that again because of the “non-scalar” choice of primes one can get things to work. I vaguely remember that the argument exploits the action of the decomposition group on the “non-finite” part, which is severely restricted because it looks multiplicative, and I guess the non-scalar matrices above are enough to force the representation to be in the finite part. Sorry for being wishy-washy — you can always look up the papers themselves.
I was just going to produce the original emails — but to my surprise I don’t seem to have kept them. My backup of old emails goes back to 2000 but not before, and I am pretty sure that this was in 1999. Of course I can’t remember the details myself now 🙁 but I do remember that multiplicity 1 was avoided because of Ken’s paper about level-lowering without multiplicity 1, and that he gave some lectures somewhere like Montreal where he explained how to come up with auxiliary primes when p=2 by using non-scalar Frobenii. Sounds like I need to read my own paper again…
IMHO, Harris is being a bit too idealistic by implying that a proof is always “the point of departure for an open-ended dialogue”. Sometimes, when you look up a theorem in the literature, you want to understand the ideas in the proof because you want to pull it apart and reassemble it in a new way. Other times, though, you just want to know that the *statement* is true, because you want to use it in some other work: you just want to know that X is true, not to understand why X is true so you can prove X + epsilon (or X^{1000}). I feel that it’s in this situation that it’s easiest to get stung by errors and omissions in the published literature.
So, if you ask me, Michael’s article is missing the real issue: it’s not the big milestone theorems like FLT that are the problem — it’s the run-of-the-mill theorems, which don’t get subjected to that sort of microscopic examination (but still sometimes get quoted and used elsewhere). How many of the 10 authors at the IAS gathering had read and sceptically examined not only all the proofs in the Wiles and Taylor-Wiles FLT papers, but all the proofs in every other paper quoted in those papers? At some stage, you have to be prepared to trust the published literature, and at the moment it’s not clear that you can.
(I was initially a bit sceptical of Kevin’s Lean crusade, but there was a precise event which changed my mind. I had a PhD student working on a problem involving p-adic L-functions — details omitted to protect the guilty! — and I ended up spending a rather depressing evening writing my student a long email along the lines of “the relevant papers here are A, B, C, D; A is correct, but B has an error in section N which invalidates everything after that, C is wrong for a different reason, and D is also wrong through no fault of the author because the main proof relies on a false statement quoted from B. The best statement that is presently known, using the subset of these papers that is correct, is XXX”. I feel that this is a pretty shameful situation, and I’m hoping that computer proof verification can save us from this kind of mess.)
A question from a non-mathematician: how widely is it known that some specific papers have problems? In your example, are the errors in B, C and D generally known to people working in the field, or would (almost) everyone need to find that out by themselves? Are even results of ‘human verification’ of (non-famous) proofs communicated well enough?
The errors are “known to the experts”. Possibly not to the journal editors though.
I fear that computer proof verification is currently far from saving us from this kind of mess. It was some sort of analogous story (frustration with unjustified claims, and assertions by smart people that turned out to be false) which led me to have the mid-life crisis which has ultimately got me thinking about formal proof verification. What I found is that these systems are not yet ready to solve the “problem” which you and I both believe exists, and which some people do not believe is a problem.
What I have found instead is a bunch of super-powerful tools, currently being used by computer scientists to do (a) verification of program correctness and (b) verification of random bits of maths, most of which is 50 or more years old and accepted by the community, and most of which is about elementary objects (indeed the most famous theorems to have been formalised in these systems are long theorems about primes, planar graphs, finite groups and spheres). In short, what I see is *potential* for this stuff to change maths, but there is a long road ahead of us. What I am personally doing (formalising stuff like schemes, group cohomology, perfectoid spaces etc) was initially about seeing whether these systems (Lean in particular) can actually even handle these more complex objects. It turns out they can — but nobody has used these systems to do anything with these objects, because the number of people on the planet who understand both the systems and the objects can unfortunately currently be counted on the fingers of a small number of hands.
Frank has gently provoked me to battle with Kevin but I am not tempted to take the bait, and I can reassure readers that participants in next week’s philosophy seminar will have to pass through an acrimony detector as well as a sarcasm detector before entering the seminar room.
For the record, I have already written about the topic at https://webusers.imj-prg.fr/~michael.harris/androids.pdf (early draft of the article in the book “Circles Disturbed,” published by Princeton University Press, which too few of you have purchased). Please let me know if one of you perceives anything in that text as clear-cut as an “actual belief”.
>Acrimony detector as well as a sarcasm detector before entering the seminar room.
I guess I’m not going.
I just saw the exchange on Peter Woit’s blog and I have to say that the comment by Past-life mathematician at http://www.math.columbia.edu/~woit/wordpress/?p=11053#comment-234161 is a very clear formulation of what might be my position if I had one. I would have been comfortable with his expression “category mistake,” and if Quanta readers can be assumed to be familiar with the term I’m likely to use it in the future. I also share P-l m’s belief that “there is a certain tendency in human beings (but mathematicians especially) … to deny as much as possible the existence of the intuitive and the non-formal” — I’m not sure about the “wish” part and I don’t want to get into an argument with Freudians. But I would never go so far as to assert that “the two groups are not actually learning the same thing” — that would amount to an “actual belief.”
Reliability of the literature doesn’t require any formalization, just a permanent open bulletin board where one can comment on publications. Open source software platforms sufficient for this have existed for (at least) about 20 years. The funding and effort needed are tiny compared to extant formalization projects.
It’s almost as if the research community does not want large-scale glasnost on its literature and Full Formalization is a conveniently expensive distraction. Instead of allowing the problems to surface (as would be inherent in the easy and cheap solution), rebuild the entire structure at 1000x the cost and grant funding, and declare victory over the previously hidden problem after it has been massively over-solved. Notice how even on MathOverflow, where it would be trivial to clone the platform as a MathLiteratureOverflow, there is resistance to using the site for errata, and a fully enforced taboo on discussing the merit of particular papers.
Oh god no. That would simply be a magnet for idiots and trolls. I wouldn’t trust MathOverflow to referee a recipe for ice cubes, let alone a paper. And I’m not sure the problem is so much the errors which are known to exist (which is true of some papers) as the errors which are not yet known, in papers which might be used but not necessarily read very carefully. As MathOverflow demonstrates, crowdsourcing doesn’t always work in mathematics for certain tasks (although it does for others).
This isn’t crowdsourced refereeing; it is having one or more sites whose pages are discussion nodes for publications. Certainly refereeing — whatever that might mean — could be one purpose for such a site, but more immediate uses are errata reports and questions and answers about the contents of the publications. I doubt very much that trolling presents a serious or hard-to-surmount problem in such a context, especially where authors are likely to register as users and answer comments about their work (as they often do on MO).
Heck, if you are worried about trolls, why not have an organized site for *authors* to self-report errata, do Q & A on their own papers, add post-publication updates, etc.? As with the use of LaTeX, the incentive is for most authors eventually to use such a system, as long as it’s provided. These are very cheap and low-tech approaches compared to formalization, but they solve a big chunk of what is proposed as the main selling point of formalization.
p.s. about MathOverflow, I have often wanted to write a blog post or MO thread on the Unwisdom Of Crowds, i.e., a big-list of MO questions that got lots of attention but attracted wrong or misleading answers, or where the wrong answers were much more popular than the correct ones.
However, in connection with literature correctness the question is not comparable to “how reliable is MO” but rather “does the existence of MO have a positive or negative influence on the reliability of literature (for MO-using readers of that literature)” and there it is clearly a big positive. Even if there is, let’s say, a 10 percent chance of exclusively bad answers when posting on MO, the ability to ask at all is enormously helpful and makes the literature more reliable than the total absence of an MO. Similarly, if there is a fixed but not too large probability that some paper gets trolled on a math literature discussion platform, the mere availability of such a platform is likely to have a large positive effect if it is adopted to any significant extent, and as adoption occurs there are ways to suppress the trolling or make it ineffectual.
The MO model suggests that almost all questions will be stupid, and of the very few that remain, there most likely won’t be anyone willing or able to answer them. I can’t imagine what it is about the MO model which suggests to you that there will be experts willing to spend the hard hours resolving issues in papers of the kind mentioned by David Loeffler, in particular reading all the references as well. About the only person with both the incentive and ability to answer technical questions is the author (although often not even then), and in the current system you can already email them.
Of course, I strongly encourage authors to make their papers publicly available and make their errata publicly available as well but I honestly don’t think your suggestion is really going to have any impact on the real problem.
Probably “trolls” is not quite the right word — that indicates a certain level of self-awareness — but some toxic mix of cluelessness and ego. Again, I think that detecting these errors is just much harder than you make it out to be, and I haven’t seen anything at all on MO that comes close to doing anything like what is required.
>does the existence of MO have a positive or negative influence on the reliability of literature
I see zero evidence of it having any influence at all on this question.
openreview.net, predominantly adopted by the machine learning community, is working pretty well as far as I can see. As an example, the paper that introduced BigGAN: https://openreview.net/forum?id=B1xsqj09Fm. Ample opportunities to ask questions: public comments and official reviews are both allowed.
“Open Peer Review. Open Publishing. Open Access. Open Discussion. Open Recommendations. Open Directory. Open API. Open Source.”
https://openreview.net/about
Open Discussion: Hosting of accepted papers, with their reviews, comments. Continued discussion forum associated with the paper post acceptance. Publication venue chairs/editors can control structure of review/comment forms, read/write access, and its timing.
OpenReview.net is built over an earlier version described in the paper Open Scholarship and Peer Review: a Time for Experimentation published in the ICML 2013 Peer Review Workshop.
We gratefully acknowledge the support of the OpenReview sponsors: Google, Facebook, NSF, the University of Massachusetts Amherst Center for Data Science, and Center for Intelligent Information Retrieval, as well as the Google Cloud Platform for donating the computing and networking services on which OpenReview.net runs.
https://openreview.net/group?id=OpenReview.net/Support
If you would like to use OpenReview for your upcoming Journal, Conference, or Workshop, please fill out and submit the form below. Please see the sections below for more details.
This is what the form looks like: https://drive.google.com/file/d/1CPRVBTx0x9ZzuCqOUjc4UDqpSB-1OjwT/view?usp=sharing
These “openreview” links discuss papers which are, on a structural level, so far removed from what current arithmetic geometry and algebraic number theory are like that they are completely irrelevant.
In the current system, experts are *already* spending large amounts of time sifting the material in published articles and discovering problems large and small. The proposal is to have a convenient locus for them to share their findings with others, instead of the status quo that includes:
– wasted effort from multiple experts independently repeating the same checking (since none post their results),
– less- or non-expert readers giving up,
– a reliance on folklore whose accessibility and existence attenuate with distance from the centers of learning,
– even in the centers, a critical dependence on a tiny number of uber-experts with encyclopedic understanding of foundations and literature who serve as de facto oracles on what’s what.
The last in particular is a ridiculous situation and, as is implicit in your comment on the difference between arithmetic geometry and machine learning (and in Kevin Buzzard’s comment on not having to ask a human oracle), it scales very badly with the complexity of dependencies in the literature.
When I was in grad school, there was no MathOverflow, and one of the papers that I wasted lots of time puzzling over was a long and dense article on arithmetic geometry. As it turned out, the central construct in the paper (a very promising thing for many applications) puzzled many other people, because it was critically wrong, but the exact reasons why were subtle, the repairs difficult and non-obvious, and public statements of the problems did not appear until 15 years after publication. At the time I assumed it was me not understanding something. For political reasons I could not, or thought I should not, ask the author. Recently, I saw that someone asked about the same problem on MO, and got an answer and a link to a paper resolving this longstanding issue. I’d call that an instance of MO improving the literature, and having such things be systematized per publication, rather than as a patchwork of random MO questions, would be even better.
@random reader: I rather like the idea of a “name and shame” website; however, in practice I can see that it would be completely chaotic in places, and one could imagine that for every useful contribution there would be several others which degenerate into squabbles; I guess one would need a team of moderators. A lot of people maintain errata to their papers on their own websites anyway (or just update the papers on e.g. the arXiv, although this has the problem that people are still going to cite the journal version). I agree that the current system is unsatisfactory; I also agree that formalisation won’t fix it (at least in the near future). One thing that formalisation can offer is that a formalised theorem leaves no room for ambiguity (“are we still assuming that p is odd in this section? In the introduction and the main theorem it was odd, so it’s needed at some point in this paper, but I’m not sure where”). I think it would be harsh to criticise a paper for not explicitly saying which results need p > 2, but conversely I’ve seen examples where people quote theorems in situations where the author did not mean for them to be applied, because ambiguities in the original paper were misunderstood by people wanting to cite it. Here one might find people blaming each other for the resulting error.
Rather than a new website, could we not make better use of MathSciNet? The site already tracks errata that are published by the authors of a paper, but other problematic papers have reviews that affirm their results. Surely a review of a paper can and should point out if it is wrong!