One of the goals of science is to explain the world.
* While computational tools will symbolically solve a lot of integrals, they won't solve them all. Resorting to numerics often means you lose some understanding along the way, because you no longer have a closed-form expression to analyze.
* One general strategy in Physics is to take a complicated expression and make different sets of simplifying assumptions to reduce it to simpler forms. This adds explanation to your model because you understand how the system behaves under different limiting scenarios. But if you are not adept at manipulating complicated expressions, you won't be able to use the strategy fully. Computer solvers are really bad at writing mathematical expressions in the nicest way possible so that the simplifying assumptions pop out naturally.
Full disclosure: I am a physicist, who uses Mathematica quite a lot to solve various expressions (but I know the limitations of the tool).
> "physical world" is where the symbolic approach completely breaks down.
It does not "break down"; it just becomes intractable in certain cases.
One needs to be able to solve problems that have all but the most essential details stripped out in order to develop a sense of how physical law actually works. Many times that is even "good enough" to get to a solution.
The best way to do that is through analytic methods, which give not only "an answer" but also tell you important features of the answer. These analytic solutions have "handles" you can use to ask "what-if" questions -- e.g. zeros in the denominator that indicate poles, the behavior of the system as you take certain limits, geometric aspects such as symmetry, patterns in recurrence relations, etc.
I just assume physicists are weirdos like that, they want reality to do easy and nice things, not math :)
As you can see, it clicks for many people and doesn't mean much for others. Back then, it was a very welcome click.
After all, it should be possible to find antiderivatives and solve ODEs without doing the dance with substitutions and changes of variables, but still, we find that dance many orders of magnitude easier than just looking at the problem and divining the correct solution.
In physics, we already deal with the fact that many of the core equations cannot be analytically solved for more than the most basic scenarios. We've had to adapt to using approximation methods and numerical methods. This will have to be another place where we adapt to a practical way of getting results.
Big equations don't come out of the ether. Either they are derived from some simpler set of underlying equations based on assumptions, or they are taken from a paper/book that did that derivation.
Usually, whoever does the derivation, or someone who wants to understand things properly, will do computations on multiple steps of the derivation from the start to the finish. A lot of these computations can be done by hand - you don't need a computer. A lot of computations should be done by hand - even if they could be done by a computer - because you only get a feel for the equations if you play with them with your hands. To quote Dirac, 'I consider that I understand an equation when I can predict the properties of its solutions, without actually solving it.' That comes from solving a lot of them by hand.
Yes, oftentimes, doing numerical or symbolic computation with a computer helps. But is the pain point of that having to type the equation into the computer? Hardly. It would be nice, but nothing groundbreaking.
I realized this the hard way while optimizing signal processing code: integrals over infinite spaces (what a great example). It was game over for my level of skill. Sadly, I was not able to optimize the code correctly without first writing the simplest optimized examples possible and showing them to a physicist (it's as if I understood/felt the way to simplify the math but had no skill to actually do it). The physicist quickly saw how to rewrite the math in a simpler way, and the final result was code 30x faster in the worst cases.
You probably don't solve quadratic equations anywhere near as much as an engineer/physicist would, and so prefer less memorization and more "being able to derive it when I need it". The latter is just too time-consuming to do every time.
I'm guessing you don't recall off the top of your head what the integral of 1/sqrt(a^2-x^2) or of 1/(a^2+x^2) is (unless you happen to teach Calc II), but most physicists likely do as they encounter it often and likely just memorized it.
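For anyone who doesn't have those two forms memorized and wants to check them rather than trust recall, here's a quick stdlib-only Python sketch -- the crude midpoint-rule helper is just for verification, not a real quadrature routine:

```python
import math

def midpoint(f, lo, hi, n=100_000):
    # crude midpoint-rule quadrature, purely for checking the closed forms
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a, b = 2.0, 1.0

# ∫ dx / sqrt(a² - x²) = arcsin(x/a) + C
lhs1 = midpoint(lambda x: 1 / math.sqrt(a * a - x * x), 0.0, b)
# ∫ dx / (a² + x²) = (1/a) · arctan(x/a) + C
lhs2 = midpoint(lambda x: 1 / (a * a + x * x), 0.0, b)

print(lhs1, math.asin(b / a))      # both ≈ 0.5236
print(lhs2, math.atan(b / a) / a)  # both ≈ 0.2318
```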
Luckily there are many interesting examples that can be solved, in particular the linear differential equations. Many equations can be approximated by a linear version.
Also, in Physics, a lot of ODEs are mysteriously integrable if the variable is x instead of t. (One reason is that it's easy to measure the force/fields, but the "real" thing is the potential, so you are measuring the derivative of a hopefully nice object.)
Also, a lot of the advanced theoretical machinery for proving that analytical solutions exist and for estimating the error in numerical integration uses the kind of stuff you learn by solving the easy examples analytically.
And also historical reasons. We have less than 100 years of easy numerical integration, and the math curriculum advances slowly. Anyway, I've seen a reduction in the coverage of the weirdest stuff, like the substitution t = tan(x/2) (or something like that, I always forget the details). It's very useful for some integrals with too many sin and cos, but it's not very insightful, so it's good to offload it to Wolfram Alpha.
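For anyone trying to recall it: that is the Weierstrass (half-angle) substitution. With t = tan(x/2), every rational function of sin and cos becomes a rational function of t:

```latex
t = \tan\frac{x}{2}, \qquad
\sin x = \frac{2t}{1+t^2}, \qquad
\cos x = \frac{1-t^2}{1+t^2}, \qquad
dx = \frac{2\,dt}{1+t^2}
```

which is exactly why it mechanically kills integrals with "too many sin and cos", and also why it feels uninsightful: the trigonometry just disappears into algebra.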
Well, to be useful, wouldn't you need to be able to use it? Most people outside of physicists and physics-adjacent fields are very far away from the mathematical tools to deal with these equations.
One example: Physics can sometimes involve pretty gnarly integrals. Some of the intuitiveness of physics comes from the fact that we study the cases where we can get a closed-form solution -- nice geometric shapes for example, where we can set up a nice mathematically described surface integral or whatever.
Unfortunately, engineers insist on designing devices which are neither perfect spheres nor perfectly flat planes (ridiculous!), and they might only have equations to describe how properties change in time or space. In this case, it can be easier to discretize the thing into a mesh, and use a matrix to describe how the physical phenomena at the points of that mesh relate to each other.
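A toy sketch of that discretization idea (a 1D Laplace problem on a uniform mesh, solved with naive Gaussian elimination -- real codes use proper FEM/FDM libraries and sparse solvers, but the structure is the same: mesh in, matrix equation out):

```python
# Discretize u''(x) = 0 on [0, 1] with u(0) = 0, u(1) = 1 onto a mesh,
# so the physics becomes a (tridiagonal) matrix equation A u = b.
n = 5                      # interior mesh points
h = 1.0 / (n + 1)

A = [[0.0] * n for _ in range(n)]
b = [0.0] * n
for i in range(n):
    A[i][i] = -2.0         # second-difference stencil (1, -2, 1)
    if i > 0:
        A[i][i - 1] = 1.0
    if i < n - 1:
        A[i][i + 1] = 1.0
b[-1] = -1.0               # known boundary value u(1)=1 moved to the RHS

# Naive Gaussian elimination (fine for a toy mesh this small)
for i in range(n):
    for j in range(i + 1, n):
        f = A[j][i] / A[i][i]
        for k in range(i, n):
            A[j][k] -= f * A[i][k]
        b[j] -= f * b[i]
u = [0.0] * n
for i in reversed(range(n)):
    u[i] = (b[i] - sum(A[i][k] * u[k] for k in range(i + 1, n))) / A[i][i]

print(u)   # recovers the exact linear profile u(x) = x at the mesh points
```

The same recipe scales to ugly 3D geometries; only the mesh and the stencil get more complicated.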
There aren't any, and the solution itself is uninteresting, but it's possible that techniques developed in the process of solving it will find application in solving other problems (in physics, chemistry, genetics, cryptography...)
Consider the most important function of calculus - the integral. In layman's terms, it measures the area under a graph. Okay - that's a little bit useful, if you care about the physics of moving objects (A bus is accelerating at 2 m/s^2 for 5 seconds, how far does it travel..?)
Yet, if you know how to integrate, a mountain of not-immediately-obvious physics problems - say, anything that has to do with electromagnetism (Maxwell's equations) - immediately becomes tractable.
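The bus example above, done both ways -- crude step-by-step numerical integration versus the closed form x = a·t²/2 = 25 m:

```python
# A bus accelerates at a = 2 m/s² for 5 s. Integrating acceleration
# twice (Euler steps) gives the distance; the closed form is a·t²/2.
a = 2.0        # m/s²
t_end = 5.0    # s
dt = 1e-4      # step size

v = x = 0.0
t = 0.0
while t < t_end:
    v += a * dt    # dv = a dt
    x += v * dt    # dx = v dt
    t += dt

print(x)  # ≈ 25.0 m, converging to a·t²/2 as dt shrinks
```

The numerical version works even when a(t) is some messy measured curve with no closed form, which is the usual trade-off the thread is discussing.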
You need to look at these techniques as being in continuity with how mathematical modeling has been generally performed by humans for a couple centuries now.
To directly explain the mechanics of what should otherwise be simple or everyday phenomena (like with three-particle orbits or the flow of water in your bathtub) often requires many equations and parameters. Since physical systems tend to be not-so-heterogeneous in structure, we can use experimental insight and exploit symmetries to reduce the number of equations and parameters up front. But this amounts to finding ways to simplify because we don't understand the precise system in play, only an analogous one.
For larger systems we have historically relied on equation solvers to do the reductions and find solutions. But there are systems for which the dimensionality cannot be easily reduced, like with language and vision. Then to be precisely predictive about these high-dimensional outcomes, we still need to compute beyond our ability to reduce the equations from the outset.
This all converges on deep learning - since in the end, it's much of the same linear algebra used by solver packages, just recursively applied to enormous high-dimensional datasets. Maybe calling it intelligence gives more agency to the process than we can stand to attribute. In many ways it's just an extension of our usual pre-computer ways of mathematical modeling, moved into so many dimensions and datapoints that it produces outcomes similar to the behavior we use our entire brain to recreate instinctively (like image recognition and language use).
Most real life integrals and differential equations are not analytically solvable anyway, so you run into tables like you do with most real life applications of logarithms.
What happens is you learn numerical approximation methods and then recreate the tables with human "computers", like we used to do before electronic computers came into play. Or you build mechanical analog computers, like they did in the 1800s or in Greek times.
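For example, here's how you'd recreate such a table today, tabulating the Gaussian error integral erf(x) (which has no elementary antiderivative) with Simpson's rule -- essentially the kind of hand calculation the human "computers" ground through:

```python
import math

def simpson(f, lo, hi, n=1000):
    # Simpson's rule (n must be even): a classic table-maker's method
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

# erf(x) = (2/√π) ∫₀ˣ e^{-t²} dt
for x in (0.5, 1.0, 1.5, 2.0):
    val = 2 / math.sqrt(math.pi) * simpson(lambda t: math.exp(-t * t), 0.0, x)
    print(f"{x:4.1f}  {val:.6f}")
```

Each printed row matches the standard erf tables (and Python's own math.erf) to six decimal places.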
I'm not avoiding mathematics. I used to be a mathematician. The math is the whole point.
In the mechanics library that comes with the book (which I'm building on in my own work), functions can take either numerical or symbolic values. If you have a computation involving symbolic values, you can manipulate it just like you would with pencil and paper (except you can operate at a higher level of abstraction, never get writer's cramp, and never have to laboriously recopy line after line of symbols to make sure you got the right number of minus signs). If, as so often happens, you find yourself up against an intractable integral, you pass the whole thing to a numerical solver and get a number back right away. With enough calculations you can build up a qualitative understanding of the system's behavior. My understanding is that this is what mechanics people do all day, but not how mechanics is taught to newcomers.
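The "same function accepts numerical or symbolic values" idea from that library can be sketched in a few lines of Python using duck typing. This Sym class is a toy stand-in of my own, not the book's actual library, but it shows the mechanism: the physics function is written once, and the argument types decide whether you get a number or an expression.

```python
# Toy symbolic value: arithmetic builds an expression string instead
# of a number, so ordinary functions work on symbols unchanged.
class Sym:
    def __init__(self, s): self.s = s
    def __add__(self, o):  return Sym(f"({self.s} + {o})")
    def __radd__(self, o): return Sym(f"({o} + {self.s})")
    def __mul__(self, o):  return Sym(f"({self.s} * {o})")
    def __rmul__(self, o): return Sym(f"({o} * {self.s})")
    def __repr__(self):    return self.s

def kinetic_energy(m, v):
    # written once, with no idea whether m and v are numbers or symbols
    return 0.5 * m * v * v

print(kinetic_energy(2.0, 3.0))            # numeric: 9.0
print(kinetic_energy(Sym("m"), Sym("v")))  # symbolic expression tree
```

A real system adds simplification, differentiation, and the numerical-solver fallback on top, but the dispatch trick is the core of it.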
This is my complaint about most of what I read about advanced physics. I have enough understanding of math and basic physics that it seems within reach for me to follow an equation-driven explanation, but the papers are a bit too dense for me. And I suppose I'm not interested enough to invest the time to truly understand all of the notation.
Using math to model a system instead of learning math qua math does wonders for ease of understanding. Derivates and integrals become easy if you're using them to model the relationship between position/velocity/acceleration. I don't think I really got linear algebra until using it to learn quantum computing.
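That linear-algebra-via-quantum-computing click fits in a few lines: a qubit state is just a 2-vector, a gate is just a 2x2 matrix, and "applying a gate" is plain matrix-vector multiplication (a stdlib-only toy sketch, not a real quantum library):

```python
import math

def apply(gate, state):
    # matrix-vector product: this *is* "applying a gate"
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

ket0 = [1.0, 0.0]                 # |0⟩ as a 2-vector
h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]             # Hadamard gate as a 2×2 matrix

plus = apply(H, ket0)             # |+⟩ = (|0⟩ + |1⟩)/√2
print(plus)                       # equal amplitudes ≈ 0.7071 each
print(apply(H, plus))             # H is its own inverse: back to |0⟩
```

Seeing superposition fall out of a matrix product makes the abstract linear-algebra definitions feel earned rather than arbitrary.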
I second this thinking. You may be interested in reading about Geometric Algebra[1] and Infinitesimal Calculus[2], as two other alternative frameworks for the symbolic tooling used in the physical sciences. Together with symbolic manipulation and calculation software, I also think alternative, simpler symbolic frameworks may help solve what I call the "receding shoulders climbing problem". There's a phrase attributed to Newton "If I have seen further it is by standing on the shoulders of Giants". The problem today is that the giants became so big, one can spend many years studying and still die before getting anywhere near their shoulders.
>>>For me, the hardest part of Math is the Byzantine formulas. My brain just doesn't look at them and go "oh, that's how it works." Instead, I need to translate them into some nice executable code (Python, etc).
I stumbled through Calculus not really getting what it was about. Physics, on the other hand, was real world: here is the formula; here is how to apply it. That made it click for me.<<<
I know exactly what you mean. I can see stuff in physics and I intuitively know what is going to happen when I take a metallic hollow piston with air inside of it and place it partially in a magnetic field. It's observation of solenoids and other stuff that allows me to do that, but I really couldn't do the same with math, and now I am trying to.