Richard Feynman’s Integral Trick (2018) (www.cantorsparadise.com)
267 points by TheTrotters | 2021-06-15 | 142 comments




This is fantastic. I’ve tried several times to understand this idea over the years, with no success. This clearly expressed the idea in only a few minutes.

One question: It mentions that Wolfram Alpha will fail on integrals that this trick can work for. Is that just because it will time out (we need more compute), or is the trick difficult to automate?


A little bit of both. I don't think WA uses the full Risch algorithm, but even then, a large enough set of pattern-matching rules (https://rulebasedintegration.org/) beats it on performance.

Differentiation under the integral sign only works for certain well-behaved functions and isn't easy to automate, since you now need to figure out where to parametrize, and you don't have good structure theorems to help you.
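As a concrete illustration of the manual workflow, here is a minimal sympy sketch on a classic toy example (int_0^oo e^(-x) sin(x)/x dx; my example, not the article's integral):

    import sympy as sp

    x, a = sp.symbols('x a', positive=True)

    # Parametrize: F(a) = int_0^oo exp(-a*x)*sin(x)/x dx; we want F(1).
    # Differentiating under the integral sign kills the awkward 1/x factor.
    dF = sp.integrate(-sp.exp(-a * x) * sp.sin(x), (x, 0, sp.oo))
    print(dF)  # -1/(a**2 + 1)

    # Integrate back in a; the condition F(a) -> 0 as a -> oo fixes the constant.
    F = sp.integrate(dF, a)                 # -atan(a), up to a constant
    C = sp.limit(-F, a, sp.oo)              # so the constant is pi/2
    print(sp.simplify((F + C).subs(a, 1)))  # pi/4, the value of the original integral

Note that sympy only executes the easy steps here; deciding to put the parameter in the exponent, which is what makes everything downstream tractable, is exactly the judgment call that's hard to automate.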

IMO contour integration is a more powerful and easier to intuit technique.


Amazing website. I love that I could just click ahead and find the .pdf transcripts of the tests they ran.

Unlike differentiation, indefinite integration is undecidable even for elementary functions, so there will always be some limit to what an algorithm can compute.

One thing I've often wondered: Is there a reason to learn all of these methods in the modern era, when Wolfram Alpha or Mathematica can apply hundreds of methods automatically?

It's good to understand things conceptually. But once I got the concept of integrals as the area under a curve, it felt like a lot of grunt work to learn so many tactics for solving them. But most of my focus has been on computers rather than pure mathematics. For pure math, it probably makes more sense to learn as many different methods as possible.


Mostly agreed, but I did like the last sentence: sometimes it is easier to solve the general problem than the specific one.

That is a useful trick to keep in mind all the time. The point, of course, is not to learn to solve integrals, but to learn a transferable piece of mental jiu-jitsu.


It seems that yes, there is a reason, if you need integrals. From TFA: "You can also try having Wolfram Alpha compute it, and it will time out. We will need to be more creative."

Mathematica/Maple/Sagemath don't have a freemium timeout mechanic and can solve a lot more. Truth be told, I think integration techniques are much less broadly crucial for everyone to learn than they used to be, although you need to have some clue of what's going on, because you need to be able to guide yourself towards posing problems in such a way that the resulting integrals can be solved.

> Mathematica/Maple/Sagemath don't have a freemium timeout mechanic and can solve a lot more.

Mathematica/Maple don't have freemium timeouts because they are not free; Sagemath, OTOH, is a good point.


But there are integrals that you can easily solve by hand but that both WolframAlpha and sage will (effectively) time out on. And I’m not even talking about something made deliberately hard for computers to symbolically analyze.

Really? I've never seen one, could you give some examples?

For Wolfram Alpha, I just tried the integral from https://en.wikipedia.org/wiki/Contour_integration#Example_2_... and it times out. The by-hand solution is pretty straightforward, if you know contour integration.

This is when 99.99% of the population would just whip out a numerical solver.

It's going to take you a while to numerically solve that integral for the uncountably many values of \alpha that you are being asked to...

It's probably best to return a function that takes alpha as a parameter and numerically integrates for whichever finite set of alphas are required by the caller.
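A minimal sketch of that closure, assuming scipy and using the article's integrand purely as an illustration:

    import numpy as np
    from scipy.integrate import quad

    def make_f(integrand):
        # Close over the integrand and integrate numerically on demand.
        def f(alpha):
            val, _err = quad(integrand, 0.0, np.pi, args=(alpha,))
            return val
        return f

    f = make_f(lambda x, a: np.log(1.0 - 2.0 * a * np.cos(x) + a * a))
    print(f(3.0))                     # ~6.9021
    print(2.0 * np.pi * np.log(3.0))  # the closed form 2*pi*ln(3), also ~6.9021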

The usual method is to compute a reasonable number of solutions and do curve fitting.

It's the algorithm in and of itself that serves as the solution to the integral.

Sometimes you are looking for deeper insights into some equations that can only be achieved by finding symbolic solutions.

You would never figure out that black holes are a solution to the Einstein field equations if you just threw a numerical solver at it, for example. (Bad example because that's arguably the easiest solution to them, but I hope you get my point.)


0.01% would whip out a numerical solver; 99.98% would go "huh?"

Of course there is. It's vital to check your calculations in some way or another, and cross-checking with other humans who know what they're doing ought to yield the correct answer eventually. I suppose this is mostly useful if you find nobody who knows how to use library X while everyone uses Y, so the only other practical option for cross-checking is other humans.

If you don't know how a calculation is performed, depending on what you're doing, there's between a chance and a good chance you'll look at an error and not know it. Everything from bugs to typos to just using the wrong method for the job can cause you grief, and having an idea of what's going on makes it far easier to spot.

I have been on both sides of that, and far prefer to know what I'm doing.


Speaking as someone who is completely useless at integrals except for the basic undergrad-level tricks, yes, there is.

Rewriting formulas in different forms can allow you to see analogies between them, allowing you to prove "mini-theorems", which you can use to make computations more efficient or to adapt slightly different mathematical tools to your problem.

Those things happen frequently even if you are "just" a developer (not necessarily with integrals/real analysis; combinatorics tricks, for example, are extremely common), but it's definitely a nice tool to have.


If you have any interest in master's level engineering, you cannot get by without a strong understanding.

I took a single master's course as a deal to get my bachelor's and that was Random Signals and Stochastic Processes. Wow, you cannot get these concepts without a super strong mathematics background.

To this day I think it was both the hardest and most fulfilling course I took.


Absolutely, because after a while this helps you internalise and see errors in other people's work that everybody else is just letting flow past them.

Even being able to do simple arithmetic in your head to check for errors in slides/talks is a major step up. It will only take one moment of realising that the person talking has made a mistake in their calculation, and is basing their argument on that mistake, to realise the power of this - and that's just the simple stuff.


This is the same line of thinking a lot of ‘coding boot camps’ seem to take: why learn 4 years of computer science and engineering fundamentals when you’ll just be using node.js in the workplace?

A dangerous trap, IMO: both are valuable but incredibly different. The former is teaching a narrow-scoped trade as opposed to learning a full-fledged engineering discipline. The latter is much more generalizable and equips you to understand / build / use most tools going forward, rather than overfitting use only to the current fad of high-level tooling.


The counterpoint I can offer is that you can kill whatever spirit exists to learn something deeply if you throw someone into the deep end first, and then deeper. The will to learn gets lost, whereas being effective can fuel wanting to be more effective. Human energy has to be paced, and will cultivated.

>> being effective can fuel wanting to be more effective.

I rather like that statement. Going to use it.


I don't think the two are so analogous. A problem with placing a high value on finding exact closed forms for integrations is that it encourages behaviour like the drunk looking for his key across the street from where he lost it, because that is where the streetlight is. Most integrations just don't have closed-form solutions and we have to live with that fact.

Bayesian statistics has been liberated by the ability to perform many dimensional integrations on the kind of likelihood functions appropriate for the problem, where before the advent of modern computational techniques Bayesian statistics had a reputation for concentrating on artificial problems that we happened to know how to solve.


For me it's in-between: I like to understand the basics but happy to use numerical algorithms for everyday use.

However, when I'm working on something that heavily uses a specific mathematical method I like to dig in and deeply understand that aspect; otherwise you can become dependent on other people's implementations that may not be optimal for your use case.


Not sure, but I've had computer-generated symbolic derivatives that the software was absolutely not able to boil down to the compactness of the result I was able to come up with manually. It was really only useful as a test to verify my own result.

If you just want the value of an integral, maybe not. If you want to understand the integral, or manipulate it in different ways, yeah there's value. Frequently, a computational solver will spit out some giant expression, while if you did it manually, you'd end up with something more compact, perhaps making new intuitive definitions along the way (generally manual common subexpression elimination and factoring).

One of the goals of science is to explain the world.

* While computational tools will symbolically solve a lot of integrals, they won't solve them all. Resorting to numerics often means you lose some understanding along the way, because you no longer have a closed-form expression to analyze.

* One general strategy in Physics is to take a complicated expression and make different sets of simplifying assumptions to reduce it to simpler forms. This adds explanation to your model because you understand how the system is said to behave under different limiting scenarios. But if you are not adept at manipulating complicated expressions, you won't be able to use the strategy fully. Computer solvers are really bad at writing mathematical expressions in the nicest way possible so that the simplifying assumptions pop out naturally.

Full disclosure: I am a physicist, who uses Mathematica quite a lot to solve various expressions (but I know the limitations of the tool).


Your second point is very interesting! Can you point me to any problems of that type? I’d be interested in learning how to make a mathematical model of something, and then simplify parts of it with various assumptions.

The one I can think of offhand is the pendulum problem, where sin(theta) is approximately equal to theta for small values. But you made it sound like there are problems with multiple parts, and many different simplifications.
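For the pendulum specifically, the simplifying assumption is just a truncated Taylor series; a quick sympy sketch of the small-angle step:

    import sympy as sp

    theta = sp.symbols('theta')

    # Expand sin(theta) about theta = 0. Keeping only the linear term turns
    # the pendulum equation theta'' = -(g/L)*sin(theta) into the solvable
    # harmonic oscillator theta'' = -(g/L)*theta.
    print(sp.series(sp.sin(theta), theta, 0, 6))
    # theta - theta**3/6 + theta**5/120 + O(theta**6)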


There definitely are. The simple pendulum (or rather the harmonic oscillator) is the problem we want to transform harder stuff into, because we know how to solve that one. In other words, you might have to make a bunch of assumptions until you have reduced your problem to simple harmonic oscillations around a steady state that you have found separately.

Check out Prandtl's Boundary Layer Theory https://en.wikipedia.org/wiki/Boundary_layer

It's a set of smart physical observations/assumptions that allow you to find closed-form solutions for the Navier-Stokes equations in fluid mechanics https://en.wikipedia.org/wiki/Navier–Stokes_equations for the important case of flow close to some body, such as an airfoil.


The article says wolfram alpha times out on this problem.

The free version of it does but the Pro version will calculate it.

If you can understand all the nuances and special cases of a concept through one-shot learning, go ahead.

I’ll disagree with everyone else here. There’s not really a good reason to understand them, no. It’s good to have a broad understanding of what tricks are out there and to get a general sense of what techniques might work where, but no reason to know them in depth or learn all the tricks.

Computer Algebra is actually not great at solving integrals (and the problem is unsolvable in general). But it’s not extremely common that one needs to symbolically integrate gnarly expressions and you can look up the tricks when it comes up.

Much in the same way that I believe introductory linear algebra is bogged down by endless matrix computation without a computer, I think forcing students to compute a million different gross integrals quickly has diminishing returns.



If you don't have the background to get this, here's a quick tutorial on integrals https://cognicull.com/en/1dc797za , and it would be cool if cognicull included Feynman's method in their ontology.

Really cool site, thanks

A very interesting way to show information and to aid learning. I haven't explored it much, but I do feel like this could be great, but isn't quite fleshed out yet. I think the tree is very difficult to navigate without being able to see what each bubble represents before clicking on it. Also, since the site contains so much content, it would be nice if it could remember what you have understood (and maybe still show it grayed out, instead of the permanent-seeming deletion of nodes).

Interestingly, they have precisely that functionality, where you can "prune" nodes as you learn them, and scrolling down the page gives very good articles on each topic. It was an HN post a while ago and I revisit it to look up refreshers on math concepts in articles like this Feynman one.

Right, I saw that, I was just nit-picking the "prune" UX. I'd far prefer to gray things out, not delete them seemingly for good.

I’m so relieved I don’t have to do math like this anymore

I think that when you're doing it for recreation or to practise problem solving it's probably a lot nicer than at school.

I've forgotten how to do most of it and when I look at it now it's hard to believe I was ever able to.

I recall learning this for an entrance exam and it was a right proper nightmare.

Fifteen years ago and still terrifies me.


Same. I was honestly getting anxious looking through all of that.

This reminds me of how much I struggled with integral calc in college. My textbook (Stewart) had a table of integrals containing 120 forms that you'd need to solve the problems in the book, and looking through them, the calculations seemed so insurmountable.

Like, https://www.wolframalpha.com/input/?i=integrate+u%5En+sqrt%2...

I looked at that and realized that I'd have no future as a physicist, and switched to CS.


IME, Stewart is pretty opaque. The intro section is a great refresher of things you'll need, but the remaining text is pretty muddy. I found an old Thomas Finney book that was much clearer, as was Richard Delaware's YouTube series.

Same textbook. The amount of auxiliary material I needed to watch off YouTube to grok it was very real. I had one of those impenetrable profs who only explained calc in the most theoretical terms.

I think the teachers who choose Stewart books don’t necessarily know how to teach down very well, so that would probably be reflected in their lectures, too. In multi-variable calculus I had a Stewart book and the professor put me to sleep. I spent a lot more time figuring it out on my own than I should have.

I was too naive to do so, but if anyone out there is in a class and the official suggested book doesn’t help you, ask the Internet for a respected alternative.


I had good teachers and Stewart and did pretty well (2001-ish, not sure how that translates, just one counter point - sorta)

We also used Stewart, and I think struggling (or excelling for that matter) at integral calculus is a poor indicator of one's competency in higher-level math, which is much more than memorizing 120 rewrite rules :)

My AP calc teacher was big into Leithold, specifically TC7 (https://www.amazon.com/Calculus-7-Louis-Leithold/dp/06734691...) when I had him. Claimed Stewart was useless, but he had a lot of strong opinions. I don't really have anything to say except that I have this feeling that I'm supposed to preach the gospel of TC7 anytime it comes up, so may as well :). Haven't touched calc in 10 years but sometimes still find myself trying to remember the chain rule (usually when miserable on a run)

This paper is an old favorite of mine. It shows how to transform a program that computes the value of a multivariate function into a program that computes that value and all of its first derivatives. The resulting program requires at most 7x as many instructions as the original. Spoiler: it's just the rules of differentiation, and the constant 7 comes from the quotient rule.

https://courses.cs.washington.edu/courses/cse446/18wi/slides...
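The paper's construction is reverse-mode and computes all partials at once; as a much smaller taste of the "it's just the rules of differentiation" point, here is a forward-mode dual-number sketch (a toy of mine, not the paper's algorithm):

    class Dual:
        """Carry a value and its derivative through ordinary arithmetic."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            return Dual(self.val + o.val, self.dot + o.dot)
        def __mul__(self, o):
            # Product rule: (uv)' = u'v + uv'
            return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
        def __truediv__(self, o):
            # Quotient rule -- the most expensive rule, hence the paper's constant.
            return Dual(self.val / o.val,
                        (self.dot * o.val - self.val * o.dot) / (o.val ** 2))

    x = Dual(3.0, 1.0)   # seed dx/dx = 1
    c = Dual(2.0)        # constants carry derivative 0
    y = (x * x + c) / x  # f(x) = (x^2 + 2)/x
    print(y.val, y.dot)  # f(3) = 11/3, f'(3) = 7/9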


> but sometimes still find myself trying to remember the chain rule

There's an algebra of differentials that was formalized quite late (I think in the 19th century) but supports all of the operations you can use for scalars. The chain rule is just fraction simplification.


How do we see that f(1) = 0 in the example? That claim is equivalent to:

int_0^pi ln[1 - cos(x)] dx = - pi ln[2]

Is this easy?


Because the integral is then over ln(1) and the log of 1 is 0.

No, that is saying that f(0) = 0 which is correct but not what I asked about (nor what was stated in the article).

Incidentally, your correct observation is in contradiction with f(a) = 2 pi log(|a|) since the latter only holds for |a| >= 1. This is because f(a) is not a smooth function.


Yeah I can't immediately spot something that would make it easy.

It's much easier to prove that the difference between the integral of log(a^2 - 2 a cos(x) + 1) and 2 pi log(a) goes to 0 when a goes to infinity.


As far as I can tell, it's not trivial, but can be derived by repeatedly exploiting symmetries of the trig functions and properties of logarithms.

One observation is that by symmetry

int_0^pi ln[1 - cos(x)] dx = int_0^pi ln[1 + cos(x)] dx

and so the calculation is equivalent to

1/2 int_0^pi ln[ sin(x)^2 ] dx = 2 int_0^{pi/2} ln[ sin(x) ] dx.

Now we can use a similar trick again:

int_0^{pi/2} ln[ sin(x) ] dx = int_0^{pi/2} ln[ cos(x) ] dx

so

4 int_0^{pi/2} ln[ sin(x) ] dx = 2 int_0^{pi/2} ln[ sin(x) cos(x) ] dx

= 2 int_0^{pi/2} ln[1/2] dx + 2 int_0^{pi/2} ln[ sin(2x) ] dx

= - pi ln[2] + int_0^pi ln[ sin(u) ] du

(using substitution u = 2x)

= - pi ln[2] + 2 int_0^{pi/2} ln[ sin(x) ] dx

Writing I = int_0^{pi/2} ln[ sin(x) ] dx, this says 4I = -pi ln[2] + 2I, so I = -(pi/2) ln[2], and the original integral is 2I = -pi ln[2].
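A quick numerical sanity check of this, assuming scipy (quad copes with the integrable log singularity at x = 0):

    import numpy as np
    from scipy.integrate import quad

    # int_0^pi ln(1 - cos x) dx should equal -pi*ln(2).
    val, _err = quad(lambda x: np.log(1.0 - np.cos(x)), 0.0, np.pi)
    print(val, -np.pi * np.log(2.0))  # both ~ -2.17759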


I think it's not obvious but it reduces to a relatively 'common' integral.

1 - cos(x) can be written as 2*sin^2(x/2)

Take the logarithm and you get ln(2) + 2 ln(sin(x/2)), which reduces to the relatively common integral of the log of a trig function; a worked solution is https://www.quora.com/How-do-I-integrate-log-cos-x-from-0-to-


In Germany there's a proverb: "Differenzieren ist Handwerk, Integrieren ist Kunst" - differentiation is craft, integration is an art.

Best exemplified by x^x. Differentiation is tricky but doable. Integration is impossible.

Depends on your definition of "impossible" ;-)

In high school, I transformed x^x into e^(x log x), expanded that into the Taylor series, and integrated term by term. I got a "solution", but it wasn't closed form: it was an infinite series. And, for a given error limit, it probably converged more slowly than a decent numerical integration. So, not worth much. But I "solved" it...
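Restricted to [0, 1], that term-by-term idea actually converges quickly; a sympy sketch (my reconstruction, not the commenter's exact derivation):

    import sympy as sp

    x = sp.symbols('x', positive=True)

    # x^x = exp(x*ln x) = sum_n (x*ln x)^n / n!; integrate term by term.
    # Over [0, 1] each term has a closed form, and the series sums to the
    # classic "sophomore's dream": int_0^1 x^x dx = sum_{k>=1} (-1)^(k+1)/k^k.
    approx = sum(
        sp.integrate((x * sp.log(x)) ** n, (x, 0, 1)) / sp.factorial(n)
        for n in range(8)
    )
    print(sp.N(approx))                            # ~0.7834305
    print(sp.Integral(x ** x, (x, 0, 1)).evalf())  # same, by numeric quadrature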


In English too. I remember my high school calc teacher saying this (except IIRC it was differentiation is science, not craft).

Relevant xkcd: https://xkcd.com/2117/

While the technique is powerful, it may also not work: there are conditions to check before you are allowed to differentiate under the integral sign.

A lot of undergraduate math programs in the US start with unnecessarily hard "weed-out" calculus classes, which is unfortunate, since it discourages students who might otherwise have pursued mathematics. I can say from personal experience that Calculus II was my worst math grade; I fared much better in rigorous and challenging classes like real analysis or differential topology. To do well in elementary calculus one seemingly has to practice integration techniques in various permutations for hours, and honestly for what future purpose I cannot say.

EDIT: I finally understood calculus after taking introduction to real analysis, and it was amazing because for the first time all the hand-waving disappeared and could be replaced with rock-solid arguments and increasing levels of abstraction (starting from the very definition of what the real numbers are). This is also important because functions can get very pathological[0][1][2]

[0] https://en.wikipedia.org/wiki/Weierstrass_function (continuous everywhere but differentiable nowhere)

[1] https://en.wikipedia.org/wiki/Cantor_function (derivative is zero almost everywhere but f(x) goes from 0 to 1)

[2] https://en.wikipedia.org/wiki/Thomae's_function (continuous at irrationals but discontinuous on rationals)


My college roommate called this "Blue Collar Mathematics."

What in particular was being referred to as blue collar and what's the analogy?

I quite like math, but I hated the college-level calc series. I had to struggle just to pull off a B average, even though I thoroughly understood the concepts. I went on to apply calculus and diff eqs quite successfully in higher-level classes, where rote memorization isn't the point.

But those weed-out classes test a bunch of arcane mechanics and memorized formulas and transformations in the most intentionally obtuse exam questions possible. If you don't know the "one weird trick" you're kind of screwed.

Math weed-out classes are a lot like the tech interview problem, but for STEM majors.


And the worst is that when you apply the "one weird trick" unexpectedly, you thoroughly confuse the graders. (In this particular case, it was an integration problem in which I used a substitution that wasn't taught by the professor.)

It's funny because while I don't consider myself "good at math", I'd always learned math by learning the fundamentals, and deriving what I need at exam time, since I find memorizing leads me to (a) go insane with boredom and (b) if I don't understand what I'm memorizing, I might apply it incorrectly without it "looking wrong".

Calc II was a class where it just wasn't possible because there was too much to derive on any given exam. It took me 1/2 way through the course to course-correct and make flashcards and such nonsense. Unfortunately that was the last (non-discrete) math class I took, so I never discovered what happens next.


It's interesting that these classes have a particular character to them, and they are also coincidentally the core math "service" courses for engineering, the hard sciences, and pre-med students. The fun isn't allowed to begin until those kids are gone. Also, the accreditation requirements for those disciplines makes it very hard to change the lower level math curriculum.

To help my kids stay interested in math, I offered them the following promise: "Suspend your judgement until you get a chance to do proofs, because proofs are when math comes alive." One of my kids became a math major.


Ha, I became a math major accidentally when I happened to take a proof based course and math finally came alive for me.

I am guessing that people with an aptitude for math, such as top physicists and mathematicians, just breeze through calc 2. They don't study more than everyone else; it just clicks faster.

Yes, I think you’re right. As a mediocre mathematician, I don’t know of anyone in my good-but-not-top PhD program that struggled with computational math. This is the feeling that I got from their relaxed attitude towards TAing those classes. My classmates also had close to 4.0 gpas in undergrad.

Calculus 3 is probably what you mean; calculus 1 does not cover integrals much beyond some of the basic techniques. The handwaving typically makes the class easier instead of harder. I find it hard to believe that someone who struggles with intro calculus will understand it by starting with elliptic functions.

There are different course names in common use for the various divisions of the calculus curriculum. There's no standard, so quibbling over course titles is kind of empty.

Not sure if the author's edit changed which calculus he's talking about, but at the time of this posting, they say Calculus 2, which meshes with my experience.

Lots of schools break up calculus in different ways, and that's fine. My school (and the schools of lots of people I know) break calculus up into calc 1, which is limits, differentials, and a toe dipping into integrals. Calc 2 is the 8 or so different tools for integrating progressively more difficult integrals. Calc 3 is multivariate calculus.

Lots of people have a very difficult time with calc 2. It feels very plodding: calc 1 and calc 3 (and diff eq and lin alg...) felt like I was learning new insight every week, while calc 2 just felt like memorizing new vocabulary words. It wasn't just that it was hard, it was that it was hard and boring.

(obviously if a school breaks calc up differently your experiences will probably be different)


AP classes have caused most US colleges to split the material in similar ways between Calc I, II, and III.

What's between Calc 1 and 3??

> I finally understood calculus after taking introduction to real analysis,

Real analysis is the version of these things taught to math students, rather than the (often mostly service) version that is taught for other programs (engineering, physics, etc.) that need calculus. It is unfortunate that many programs are structured so you can't even see this before surviving the standard 1st year calc progression, especially at large universities.

How math-oriented or not a particular program is varies obviously, but it's pretty common to see this distinction. When I was an undergraduate, entry to the honors math program ignored all calculus classes and results entirely (if I recall correctly, it was based on having a 1st class standing in linear and modern algebra courses). I think it was entirely possible to complete a math major with little or no calculus at all.


What finally made calculus click for me was my class in Numerical Mathematics where I actually wrote programs to take derivatives. Also because of that course, functions became familiar, fun, and easy to rearrange. It was life changing.

Going between discrete and continuous really helped me too. I think such a thing, using computers even, could do wonders for undergrad math education.

Some kind of awesome integrated class where kids work with robot toy cars comes to mind as an interesting way it could be presented. (Start by measuring their behavior and collecting data, then compute crude integrals on the computer, then move into analysis using the reals.)


raises hand

I started freshman year intending to major in math, started with Calc 3. When the average grade among my classmates on the first test was a 56/100, curved of course, I knew something had to give. This was not the fun math I knew from before. A+ student up until this moment.


Integral problems are to math as Leetcode is to software engineering.

Very similar experience for a lot of math programs in France. For about a year we did a lot of repetitive and uninteresting stuff (integration by parts, compute Taylor series on the whiteboard, ...).

The only concept from that time I used in my day job is binomial coefficients. Yet I don't regret taking the class; in some inexplicable way I feel it has made me better (at what, I have no idea).


> in some inexplicable way I feel it has made me better

The only thing I really "learned" from studying calculus that I actually apply to the real-world is the ability to slow down and take each part individually. When I first took calculus, I tried to rush through the problems and inevitably dropped something important. It wasn't until I started forcing myself to write down each step (even if I thought I could do it in my head) that I started actually getting the right answers, and knowing that I actually had.


I always found it funny how much time was spent drilling techniques for different integral types or doing transforms for derivatives, yet the most important ideas (the continuous nature of the reals and why it is the rug that pulls it all together, and the delta-epsilon definition of the limit) only got about a grand total of 3 minutes of hand waving and an optional problem on one homework.

Undergrad calculus was about having algebraic/trig manipulations memorized along with a table of transforms and a handful of tricks, while the actually beautiful idea (that if we use an infinitely "elastic" representation of numbers, we can solve hard approximation problems both correctly and easily) got totally glossed over.

Physics has the same problem. Basic physics without calculus is just a bunch of rote memorization. The idea that such a thing is taught, and that it is somehow "easier", is nuts. They should be taught together, as they were developed, since many of the expressions given to undergrads in physics are simply definitions of integrals and derivatives applied a few times.

No student in calculus should ever be wondering what the constant is for in a computed integral, and no student in physics should wonder where the constants come from in equations of motion. (There should be no equations of motion, just definitions in integral/derivative form and definitions of integrals and derivatives.)


As someone who has taught intro calculus a few times, one reason for the emphasis on computational techniques is simply that 90% of the students in such a class do not care about, or are not capable of grasping, the epsilon-delta definition of the limit.

Solution: spend more time on epsilon-delta so that students have time to wrap their minds around the idea. But I think the engineering departments would complain that the students who we send on to them cannot do basic computations. Also students would complain that we spend too much time on theory and not enough on application. There are probably other reasons that someone more experienced would know about.


I tutored calc too, but I had an unanswered complaint about the epsilon/delta definition of a limit: I found it to be circular, in that it presupposes the reality of the infinity implicit in induction. There was no impact on the computations required by the class, and over time I recognized that my intuition was rebelling against unphysical math concepts. I was a physics major, so I was content to hand-wave my own concerns away as a merely philosophical problem, since you could always pick a sufficiently large number of induction iterations to satisfy any practical need. But yeah, it left a bad taste in my mouth that no one seemed to care about this issue. (Oddly, the use of induction for similar things, like Cantor's diagonal argument, didn't bother me at all. It was specifically epsilon-delta that did, and still does, bother me.)

The epsilon delta definition of limit is not circular and there is no infinity anywhere in it. Are you referring to the limit as something goes to +infinity on the real line? That's just bookkeeping shorthand for some quantifiers over bounded numbers.

I'm genuinely curious as to what you were getting hung up on, because odds are you learned the wrong definitions or somehow changed the definition to something that is incorrect, and then discovered the error later on. It's a fairly straightforward definition if you can do the proper bookkeeping over the quantifiers, and is designed to not have anything to do with infinities. But many people struggle with quantifiers.


This is the point. They do not stress in beginning calculus that continuity is actually a profound property of a mathematical construction (the reals) that we use for modeling physical phenomena.

It's not the delta-epsilon definition that is slightly weird; it's what it means to be truly continuous that is. (Uncountable infinity, or as I like to call it, endless zooming everywhere.)

Once that is clear, the delta-epsilon definition becomes obvious.


Continuity is just a formal property. I'm not sure if it is "profound", that's more of an emotional response, which is very personal.

Mathematicians use words like "deep" to describe properties that are useful in many branches of math. Is continuity deep? I think what's deep is the notion of compactness and the essence of compactness is open covers and finite subcovers, so the deep idea really is about epsilons and deltas and not handwaving about "what it really means" to be continuous.

For ruminations on what things "truly mean", you are not going to find that in a math book. Perhaps a philosophy book would better scratch that itch for you. Math is self-contained and provides its own meaning in terms of the relationships between formal properties and precise answers to precise questions. Some people truly love exploring the relationships between formal properties and discovering the answers to difficult questions that reveal new logical structures. If that is not interesting enough for you, perhaps you will get more satisfaction studying a different field.


Any statement about infinity is not something you can check directly, by hypothesis. The powerful influence of social proof cannot and should not be ignored. There are too many smart people who don't know anything about the foundations of mathematics, and believe that there aren't any consistency issues.

My point is that as a practical matter, they're right. Calculus works really well. But consider for the moment the question of why you are convinced that induction is valid. There is no proof of induction's validity. The epsilon delta thing is a definition, not a proof, and serves, effectively, to invert infinity into an infinitesimal. I am on firm ground with these statements, like it or not.

I agree there's little to be gained examining what things really mean when it comes to meaning itself - it is always going to be circular. I believe that the foundations of math lay primarily in its utility, and secondarily in the social fabric of experts (which may or may not be related to utility). The eerie, beautiful way in which math describes the world is fundamentally a human phenomena, and it's based on aesthetics, not logic. I suppose my objection is simply that one shouldn't go around thinking that the epsilon delta definition removes the ambiguity and messiness that actually underlies mathematics.


In an effort to understand what you mean by 'induction' and 'infinity', can you explain where induction comes up in the following standard definition of a limit:

"f has a limit L at x" means exactly that

"forAll epsilon>0 thereExists delta>0 such that if |x-y|<delta then |f(y)-L|<epsilon."


"forAll"

Okay, I see a link to 'infinity' there, because there are infinitely many reals. Is that what you're going for?

If so, the link seems irrelevant and in particular I don't see how this leads to circularity in the definition. Can you explain this further?


> But consider for the moment the question of why you are convinced that induction is valid. There is no proof of induction's validity. The epsilon delta thing is a definition, not a proof, and serves, effectively, to invert infinity into an infinitesimal. I am on firm ground with these statements, like it or not.

I'm afraid these statements are a mishmash of some correct and incorrect statements, and a logical argument like that is considered incorrect.

* Yes, the definition of limit is a definition.

* The definition of limit has nothing to do with induction or infinity. I'm honestly baffled why these three distinct concepts are being conflated.

* For well-ordered sets, induction is just reductio ad absurdum: you assume some element fails the condition, take the smallest such element, and derive a contradiction, because the condition holding for everything below it forces it to hold for that element as well. There is a valid question as to whether every set can be well-ordered, which is an axiom, but for countable sets, which is where virtually all induction arguments are used, no axiom of choice is needed to use induction-style arguments.

* The statement "to invert infinity into an infinitesimal." is gibberish.


> the very definition of what the real numbers are

I wonder what that was. (In my world it was just a bunch of axioms.)


It was given axiomatically, with classical set theory as the foundation. Define the reals as a complete, ordered field. In fact it can also be proven that any two complete ordered fields are isomorphic. From this you can prove various cancellative, associative and distributive laws and obvious things like 0 < 1. See[0] for a readable Coq formalization.

[0] https://www.cs.umd.edu/~rrand/vqc/Real.html


There are also set-theoretic constructions of the Reals that satisfy the axioms (e.g. Dedekind cuts)

At least when I went to undergrad, calculus was a university requirement, not a major requirement. Like English 101, except it is of course taught by math department faculty just as English 101 is taught by English faculty. Everyone had to take it or test out of it.

For actual math requirements, you started with real and complex analysis, abstract algebra, geometry and topology, and then some applied math classes such as partial differential equations, or numerical methods. There was also a requirement for probability and statistics.

One way you can tell the difference between a general requirement class and a major class is the size of the classroom and the majors taking the course. If you are in an auditorium with 300 freshmen taught by a TA and almost no one else in that course is a math major, then you are looking at a university requirement rather than a college requirement.

University requirements are not intended to weed anyone out, that would be contrary to the goals of the university. They should be doable by all who are admitted. When I went to grad school, I had the pleasure of teaching some of these calculus classes, and no one considered this to be a weed out class or a math major class. All the math majors we had tested out of calculus in high school, and most of our students had humanities majors (as the STEM students also tended to test out of it). Giving those humanity majors lots of tricky problems in order to try to weed them out from their own majors wouldn't make any sense.

Moreover, the key skill in being a math major is the ability to do proofs. So the weed-out classes tend to be real analysis or abstract algebra, as these are the classes where students first do proofs. Since there are traditionally no proofs in calculus classes (the books may provide proofs, but you are tested on your ability to calculate, not your ability to prove theorems), calculus wouldn't be a good weed-out class for math majors even if it weren't a general requirement class taken by all majors.


What university is this where a humanities major is required to learn and apply integration rules?

Undergrad was Arizona State. Yes, having a college education requires knowing basic stuff like how to write a college essay or how to find the area under a curve. At that time, it was grouped by Numeracy or Literacy Requirements, so calculus met the N1 requirement and you could satisfy your L1 with English 101. Of course you could take more advanced classes as well if you wanted, but there was no credit for taking high school math classes to satisfy the university Numeracy requirements. So no trig or pre-calc would cut it. There were also social studies requirements, etc. The idea is that a "liberal arts education" requires these. And of course virtually all math majors would already have tested out of them, even in big state schools.

I think your claim that "calculus was a university requirement" doesn't align with what I'm reading in their degree requirements, which is "MA and CS: Mathematical Studies (combined six credit hours)"

> This core area has two categories. Mathematics (MA) is the acquisition of essential skills in basic mathematics. Computer/statistics/quantitative applications (CS) applies mathematical reasoning and requires students to complete a course in either the use of statistics and quantitative analysis or the use of a computer to assist in serious analytical math work.

> This requirement has two parts: At least three credit hours must be selected from courses designated MA and at least three credit hours must be selected from courses designated CS, and all students are expected to fulfill the MA requirement by the time they accumulate 30 credit hours in residence at ASU.

Sounds like you could take calculus to satisfy the Math requirement, but you could decide to do other Maths instead?

https://catalog.asu.edu/ug_gsr


Since many other countries fund their universities differently, I imagine weeder courses would be less necessary there. If so, is calculus taught differently there? If intuition were valued more in math instruction, I think many US students would take more math courses, as well as more technical courses that employ higher math and math-based insights.

I was and am bad at integration problems like this. I find them scary. And mysterious. Somehow I associate these with engineering type classes, but that may not be generally true.

On the other hand, I really enjoyed and did well in classes like real analysis, functional analysis and measure theory, even though I am not a mathematician.

But those classes are fun. Say, the construction of measures and Lebesgue type integrals makes sense to me.

Integration by parts? And this stuff? I feel like a stupid monkey. It's weird.

For a real mathematician, both of these types of skills are probably important. I wish I could do integrals better.


They are just games with symbols. But yes, some moves are scary. The moves that scare me are those whacky out-of-nowhere algebraic additions that lead to a simplification later. They are scary because somehow someone anticipated that one addition out of an infinity of possibilities would be useful "down the road" in the computation. (This is done a couple of times in the OP's integral.)

I think it is interesting that some notions in this thread boil down to the idea that this ain't real math like, say, real analysis.

In my limited view, many proofs are easier. Especially easy ones that are (if that makes sense) more sequential in the required thinking.

I think there is a lot of cleverness in these integral calculations, a lot of creativity to, as you say, come up with some seemingly nonsensical transformation that down the road turns out to be useful.

Perhaps it's because I am so bad at it, but I find people who do this easily very impressive.

I feel the same way for differential equations, where, incidentally, I also suck.

I wonder whether anyone ever did a study on the cognitive skill "dimensions" in maths.


It worked for this one because we knew the answer beforehand and the best approach. It's not like we can generalize this. Change some of the terms and poof, it's unsolvable.

That's true in general for all integrals. Method A solves this formidable looking integral nicely and simply. Make one small change, method A completely fails and now you'd need method B to solve it, which is not at all related to method A.

Funny story, I was a huge fan boi of Feynman's and read everything I could about him. I used this technique to integrate the "extra credit" problem on my freshman final, which I turned in with about 20 minutes to spare, and the professor accused me of cheating by knowing the answer ahead of time. When I showed him the steps and explained the origins he allowed that perhaps I hadn't cheated, but was disappointed I had used a technique that wasn't taught in class so that was somewhat unfair to the other students.

Not my best professor.


How is knowing the answer 'cheating' by any stretch of that word?

The assertion was that I had somehow acquired the answers to the test before I took the test. The evidence for that, flimsy as it was, was that I finished the test "too quickly to have done the work."

Getting the answers before the test was the alleged cheating.


What was frustrating to me, in the days before this sort of thing was all over YouTube, was reading that Feynman had a magical technique in biographies and so on, but not being able to find any reference to what the damn trick was.

> When I showed him the steps and explained the origins

Did you not put the steps on the paper you submitted?


Not enough apparently.

But to be fair, as a freshman in a college calculus class that was essentially reviewing what I had already done in high school I was kind of an asshole. I skipped a lot of his classes and often did the formulaic questions in my head and just wrote down the answer.

I don't recommend this approach.


I've only ever been accused of cheating once[1], which is perhaps surprising given my poor study habits and high test scores. It probably helps that I tend towards a fairly deliberate pace on my tests, so I was rarely done super-early.

1: That one time was not even in school; when I graduated I applied to OCS, and the recruiter told me point-blank that his superiors thought I cheated on the test. It was apparently the highest score anyone at this particular recruiting site had ever seen, and my grades were mediocre (see above about poor study habits).


What's the difference between "an arbitrary constant" and "a variable"?

The definition is arbitrary. And it varies. ;-)

Both are symbols. The difference is kind of subjective, and has to do with how you treat the symbol. Do you just carry it through your derivation, or are you interested in what happens when you feed it specific values?

But I believe your objection is valid. And to be honest I got all the way through a college math major by just treating everything as symbol manipulation.

Caring about numerical values was for my other major, physics.


In programmer's terms, an (arbitrary) constant is a parameter. (A variable is, well, a variable.)

Past related threads:

Differentiation Under Integral Sign (2015) [pdf] - https://news.ycombinator.com/item?id=26123750 - Feb 2021 (59 comments)

Feynman's Integral Trick - https://news.ycombinator.com/item?id=26040353 - Feb 2021 (6 comments)

Richard Feynman's Integral Trick - https://news.ycombinator.com/item?id=21055728 - Sept 2019 (8 comments)

Richard Feynman's Integral Trick - https://news.ycombinator.com/item?id=17558752 - July 2018 (35 comments)


I don't really understand how the author jumps from -pi*(1+a^2)/(1-a^2) to df/da = 2pi/a. Does anyone know how the author did it?

Notice that it is actually df/da = (pi/a) - (1/a) (1-a^2)/(1+a^2) int_0^pi (1 - (1-a^2) / (1 + a^2 + 2 a cos x)) dx.

It is that right integral that is equal to -pi (1+a^2)/(1-a^2).

So when you add you get:

pi/a - 1/a (1-a^2)/(1+a^2) * (-pi (1+a^2)/(1-a^2)) = pi/a - (-pi/a) = 2 pi/a
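A numerical spot check of the end result (my sketch, assuming scipy; central difference for df/da):

    import numpy as np
    from scipy.integrate import quad

    def f(a):
        # f(a) = int_0^pi ln(1 - 2a cos x + a^2) dx
        val, _err = quad(lambda x: np.log(1 - 2 * a * np.cos(x) + a * a),
                         0, np.pi)
        return val

    a, h = 3.0, 1e-5
    print((f(a + h) - f(a - h)) / (2 * h))  # ~2.0944
    print(2 * np.pi / a)                    # 2*pi/3, also ~2.0944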


In Mathematica 12.3: Integrate[Log[1 - 2 a Cos[x] + a^2], {x, 0, Pi}, Assumptions -> Abs[a] >= 1 && a \[Element] Reals] gives the correct answer -\[Pi] Log[1/a^2]

The first integral can be solved by replacing the integrand with a series of sorts. Notice that the expression inside the logarithm has zeros at

    $\alpha = e^{\pm ix}$
So we can rewrite the function we're integrating as

    $log((\alpha - e^{ix})(\alpha - e^{-ix}))$
which is just

    $2log(\alpha) + log(1 - \frac{e^{ix}}{\alpha}) + log(1 - \frac{e^{-ix}}{\alpha})$
Using

    $log(1 - x) = -\sum_{n=1}^{\infty} \frac{x^n}{n}$
We get

    $2log(\alpha) - \sum\frac{e^{inx}}{n\alpha^{n}}-\sum\frac{e^{-inx}}{n\alpha^{n}}$
which is just

    $2log(\alpha) -2\sum \frac{cos(nx)}{n\alpha^n}$
The integral of the second half of this involves a $sin(nx)$ term, which evaluates to zero at both 0 and $\pi$ for every integer n.

Leaving just the integral of $2log(\alpha)$, which is just $2\pi log(\alpha)$.
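A quick numerical check of the series result for an \alpha outside the unit circle, assuming scipy:

    import numpy as np
    from scipy.integrate import quad

    alpha = 2.5
    # The integrand's argument is (alpha - e^{ix})(alpha - e^{-ix}).
    val, _err = quad(lambda x: np.log(alpha**2 - 2 * alpha * np.cos(x) + 1),
                     0, np.pi)
    print(val, 2 * np.pi * np.log(alpha))  # both ~5.7570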


Actually, this method was already used by Leibniz, although it was not that common in Feynman's time. https://en.wikipedia.org/wiki/Leibniz_integral_rule

Yes, that's the first sentence of the main article body.

> Today’s article is going to discuss an obscure but powerful integration technique most commonly known as differentiation under the integral sign, but occasionally referred to as “Feynman’s technique” due to his popularization of this technique in his book, and properly known as the Leibniz Integral Rule.


The funny thing is that the calculation in this article misses the point, especially in the Feynman context. First, beyond all trickery, the log(alpha) answer might suggest that something bad happens at alpha=0. What makes this integral interesting is that it is identically zero for alpha<1.

The reason, of course, is that this integral is not randomly chosen -- it represents the two-dimensional Coulomb potential (log(r)) of a sphere (circle) of radius 1 at distance alpha from the center. But when the point alpha is inside the circle, the potential is constant (so zero force). When alpha is outside, the potential is log(r), as if all the mass of the circle were at its center. The expression under the log in the integral is just (the square of) the distance between the point alpha and a point on the unit circle.

Beyond tricks -- the physical reason for the singular behavior of this integral is the Gauss theorem for the Coulomb potential. So no magic.


A thing I've been wondering is why integration is harder than differentiation. The latter can be done almost mechanically, as long as your primitive functions are "nice", but the former often requires cleverness like what's shown in the article.

I mean, sure, we have simpler rules for differentiation, but _why_?

I sometimes wonder if differentiation is P and integration is NP (for the restricted case of functions where the primitives are "nice")


> I sometimes wonder if differentiation is P and integration is NP (for the restricted case of functions where the primitives are "nice")

It's something pretty close to this. Almost all mathematical expressions can't be integrated (up to the reader to formalise the definitions here). That is not quite the same claim as saying that the best method for solving an arbitrary integral is trial and error by systematically differentiating other expressions until you find the antiderivative.


Differentiation is a function that takes a function and spits out a different one. (This may fail at points, but wherever the function is smooth enough, you get a corresponding point on the derivative). It is relatively simple to define, if somewhat laborious to make precise (real analysis etc).

The most straightforward definition of indefinite integration on the other hand is “the inverse of differentiation”. Like a lot of functions, inverses are much more complicated. Even in the case of matrix-vector multiplication, the inverse function (the inverse matrix) has an expression in terms of determinants in the original entries of the matrix - much more complicated than the original problem.

So I think that it’s the more simple theme of: even if the function is easy to describe, its inverse may be extremely complicated.


You could always say integration is a function that takes a function and spits out another one, and differentiation is the inverse of integration. There's no intrinsic reason why an inverse is anymore difficult or why the derivative is the "base" and integration is the inverse.

The more abstract reason differentiation is easier than integration is that, because differentiation is a local operation, given a family of functions, differentiation will be closed within that family. That is, the derivative of an elementary function is an elementary function, and the derivative of a rational function is a rational function; the derivative of a function f is a function f' that is within the same family as f and will share almost all of the same properties as f.

Integration, on the other hand, does not have this property, and in fact it's very difficult to find a family of functions that is closed under integration. The integral of a rational function may not be rational; the integral of an elementary function may not be elementary. Integration is a global operation whose solution may require the construction of a new family of functions whose properties are entirely unlike the function being integrated.

It's like how squaring a number is usually closed within that number system, but taking the square root of a number could require creating entirely new number systems, whether it's real numbers or complex numbers (at which point the square root is closed). With integration, the issue is that it never ends up being closed, you can use integration on a family of functions to jump out of that family and construct entirely new families of functions seemingly without end.

In a way, differentiation produces functions whose properties are a subset of the original function, but integration produces functions whose properties expand upon the original function.
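A two-line sympy illustration of that asymmetry:

    import sympy as sp

    x = sp.symbols('x')

    # Differentiation stays inside the elementary functions...
    print(sp.diff(sp.exp(-x**2) * sp.sin(x), x))
    # ...while integration can force you out of the family: sympy has to
    # reach for the special function erf to express this antiderivative.
    print(sp.integrate(sp.exp(-x**2), x))  # sqrt(pi)*erf(x)/2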


Interesting example they chose there. It requires no fewer than three deus ex machina steps: apply this trick, then apply this very particular substitution, then find another substitution. If you encountered this integral in the wild without knowing whether/how it can be solved, yes, you might have tried the trick, but you wouldn't realize that it had gotten you anywhere. Would you continue on and find the just-so substitutions, or would you backtrack and try something else?

It's great that we have the tricks we have, but at the same time most nontrivial integrals are just impenetrable regardless. Any demonstration of integration techniques you find will be on an integrand that is amenable to these techniques, and will only show the straight path to the solution, not the process of finding that path. I hate integrals!

