Your analogy between comparing fractional calculus to integer calculus and trying to interpret e^(iπ) as repeated multiplication of e is perfect.
THE MAN WHO INVENTED THIS WAS A PROFESSOR AT MY SCHOOL. He passed away, but he was a really cool guy (he ran ultramarathons until he died!) and he was still teaching too. Thomas Osler was his name, really fascinating guy.
@[3:20]: Notice how the nth derivative of xᵃ is (a!/(a−n)!)x⁽ᵃ⁻ⁿ⁾. Extending that formula to non-integer n gives (3!/(2.5)!)x²·⁵, and we can apply the interpolated factorial (the Gamma function, which is what most calculators actually use) to obtain the coefficient: about 1.8054066673528201182338542449945.
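You can reproduce that coefficient in a couple of lines of Python with the standard-library gamma function (variable names are my own):

```python
from math import gamma

# Half (n = 0.5) derivative of x^3: coefficient = 3!/(2.5)!, using a! = gamma(a + 1)
a, n = 3, 0.5
print(gamma(a + 1) / gamma(a - n + 1))  # ~1.8054066673528202
```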
Half the derivative, twice the fun
Wonderful video! Is it possible to take derivatives of complex order? What would they mean?
Next stop: imaginary derivatives
Here's a signal processing perspective: the derivative operator is a linear filter whose frequency response is H(ω) = iω. To find the half derivative, simply use a filter whose frequency response is the square root of that, (iω)^(1/2). Of course, the tricky part is defining Fourier transforms of arbitrary functions in a meaningful way so one can apply the frequency response. I guess one can use windowed versions of the functions, then let the window width go towards infinity.
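A minimal numerical sketch of that filter idea in Python/numpy, assuming a periodic, well-sampled signal (which sidesteps the windowing issue entirely):

```python
import numpy as np

# Spectral half derivative: multiply the FFT by (i*omega)^(1/2), transform back.
N = 1024
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(x)

omega = np.fft.fftfreq(N, d=x[1] - x[0]) * 2 * np.pi   # angular frequencies
half_deriv = np.fft.ifft((1j * omega) ** 0.5 * np.fft.fft(f)).real

# Check: under this definition the half derivative of sin(x) is sin(x + pi/4),
# i.e. half of the quarter-period phase shift a full derivative would give.
print(np.max(np.abs(half_deriv - np.sin(x + np.pi / 4))))  # machine-precision small
```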
At 7:34, the sqrt(pi) on the inside and the one on the outside really combine into a full pi in the denominator, which would presumably cancel with a pi in the numerator generated by the trig substitution required to handle (t-x)^(-1/2). Trig subs love to happen when you have simple square roots of the integration variable in the denominator, and where there's trig, there's pi.
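One concrete instance of that pattern (my own pick, not necessarily the exact integral from the video): square roots of the integration variable in the denominator produce a clean pi via t = x sin²θ:

```latex
\int_0^x \frac{dt}{\sqrt{t}\,\sqrt{x-t}}
  \overset{t = x\sin^2\theta}{=}
  \int_0^{\pi/2} \frac{2x\sin\theta\cos\theta}{\sqrt{x}\sin\theta\,\sqrt{x}\cos\theta}\,d\theta
  = \int_0^{\pi/2} 2\,d\theta = \pi.
```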
Though I don't fully understand the whole video yet, the last sentence is enlightening: "...a deeper appreciation for the clever techniques mathematicians use to extend concepts to domains where they at first don't seem applicable, and the fascinating things that can result ---- a process that is very much a part of the spirit of modern math."
I'm happy the algorithm recommended this awesome video. It's like discovering another dimension. It's that moment you realize something absolutely new and your brain celebrates it like a new birthday.
Excellent video! Liked it a lot. However, there is one mistake in the video that must be addressed: we DO know what fractional derivatives mean! They simply cannot be described as a "tangent line" or an "area under the curve". Think about it this way: taking the 0th derivative means keeping the function as it is; taking the 1st derivative means looking at the CHANGES of the function. A fractional-order derivative expresses how much MEMORY of the original function we want to keep. For example, a 1/2-order derivative balances equally between remembering the original function and looking at its changes, while a 1/3-order derivative gives 2/3 of the weight to the original function and 1/3 of the weight to its changes.

This is extremely helpful in many fields. Personally, I use it when researching the stock exchange. If I look at a stock price, two things interest me: the price of the stock and how it changes over time. In the end, predicting one of them will help me predict the other. It has been found that predicting a fractional derivative of the stock price is often easier than predicting the full 1st derivative or the 0th derivative (the actual stock price)! So that's my point: we know what fractional derivatives mean. It's the balance between looking at the changes of a function and looking at the function as it is.
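If you want to play with this "memory" picture numerically, here's a minimal sketch of fractional differencing in Python (Grünwald-Letnikov-style binomial weights; the function name, truncation, and toy series are my own illustrative choices):

```python
import numpy as np

def frac_diff(series, d):
    """Fractional difference of order d: d=0 returns the series unchanged,
    d=1 returns first differences, 0<d<1 keeps partial memory of the past."""
    n = len(series)
    w = [1.0]
    for k in range(1, n):
        w.append(-w[-1] * (d - k + 1) / k)   # weights (-1)^k * C(d, k)
    w = np.array(w)
    # each output value is a weighted sum of the current and all past values
    return np.array([np.dot(w[:i + 1], series[i::-1]) for i in range(n)])

prices = np.cumsum(np.random.randn(500))     # toy random-walk "price" series
half = frac_diff(prices, 0.5)                # between the price and its changes
```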
2:54 Here's the formula I found for the coefficient: c(a) = a!/(a-½)!, where a is the exponent and the factorial is expressed in terms of the Gamma function: a! = Γ(a+1). It can also be extended by replacing ½ with any order you want.
When I saw the factorial of -0.5, my mind immediately exploded: what is gamma doing here? Any stats major, anyone specializing in Enterprise Risk Management, or an actuary (my category) would immediately recognize the gamma distribution. I was not expecting to learn the proof behind one of the most useful probability density functions. Amazing.
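For anyone connecting the dots: the fact that makes (-0.5)! = Γ(1/2) finite, and ties all of this to the normal distribution, is the Gaussian integral in disguise:

```latex
\Gamma\!\left(\tfrac{1}{2}\right) = \int_0^\infty t^{-1/2} e^{-t}\,dt
  \overset{t = u^2}{=} 2\int_0^\infty e^{-u^2}\,du = \sqrt{\pi}.
```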
The derivative and integral operators can be seen as smoothing and non-linear frequency scaling. Take the FT of your derivegral and you will essentially get a spectrum modification. For integer parameters the frequencies "line up", so constructive and destructive interference can properly take place, canceling all the non-linearity. That is, you have what is essentially a convolution followed by a derivative, and when you take the FT of such a thing you end up with a power-scaling relationship w^(p-a) modifying the original spectrum.

The point here is that the non-local behavior is due to the process actually working in the frequency domain; it just simplifies in the integer case. The interpolation requires certain constraints at integer values so they line up with our traditional usage... hence the derivegral is a generalization that simplifies to our basic operators. It's just one form of interpolation, and there can be no absolute generalization, since any generalization can work. Hence the "interpretation" of some fractional derivative is simply going to be the specific mechanism by which the transform was designed.
The options for fractional derivatives remind me of Euclid's 5th axiom of geometry. Euclid hated that he had to explicitly state that parallel lines never intersect, but what he didn't realize is that this was required to differentiate flat-plane geometry from hyperbolic and elliptic geometries. Had mathematicians discovered a system isomorphic (is that the right word?) to his first 4 axioms but not in the context of geometry, then we would similarly find ourselves with multiple options for extending the theory.
What amazes me most of all about fractional derivatives is that they have actual real world applications in physics.
Wow, this felt like my graduate level math class wherein we define stuff that we don't really understand and point out the different strange properties it satisfies.
Wow, this is beautiful. It really demonstrates how mathematicians are able to extend concepts beyond their original domain... and it also shows how doing so can eliminate structure. Here we see the geometric meaning (kind of) disappear.
I first learned about fractional derivatives watching Dr. Peyam's videos a few years ago. Since then I've been working on applying more than just square roots to differential operators. From what I've seen, fractional powers are just a really bad class of functions to do this with, because all the fractional monomials have branches and asymptotes; it's all very messy. The exponential function of the derivative operator? Classic, that's your shift operator. The reciprocal of a linear function of the derivative? A generalized Laplace transform! It all works surprisingly well. Using just the shift-operator definition and regular derivatives, plus some fairly predictable rules for translating things into and out of integrals, you can work up a healthy repertoire of functions applied to derivatives.

My crown jewel so far in all of this was uncovering a super secret identity! It's sort of like the mother of all generalizations of the product rule, way beyond the generalized Leibniz rule. As symbols:

[f(D_x)] (g(x) * y(x)) = [ [g(D_z + s)]_{z=D_x} (f(z)) ]_{s=x} y(x)

In words: for any function-of-derivative 'f' taken of a product of functions, one of those functions may be taken out and applied as a function-of-derivative of 'f'. There is a quirk in that, given away by the use of the dummy variable 's': it's there to ensure the operator is well-defined, because you really shouldn't mix the variable you differentiate with respect to into the operator itself; with these highly non-linear function-of-derivative operators it can become unclear what's differentiated when. Technically there would be no notational problem, and I've over-notated the issue, especially by using square brackets to denote these function-of-derivative operators, but alas.

Within these studies, I've come closer and closer to difficult problems of little importance, and also stumbled upon the triviality of finding an operator whose eigenvalues are the zeros of the Riemann Zeta function. Alas, I do not have the knowledge to determine or construct a space in which such an operator would also be self-adjoint. It's not an easy topic to just dive into, especially without having taken a class beyond differential equations.

One of those curious problems of little importance is finding a non-trivial differential equation (that is, a function-of-derivative equation) whose solutions include the gamma function. I've actually gotten really close! The equation

[e^(e^(-D_x))] y(x) = 0

should have the gamma function as a solution. I devised this using the identity above, specialized for a particular case (I've forgotten the original derivation) of an operator that somehow multiplies by the independent variable. You see, this should be impossible, because all of these function-of-derivative operators (perhaps excluding peculiar cases of non-meromorphic functions; see what I said before) are completely linear, and they commute with each other. That is not conducive to an operator where you put in a function and get out x times that function. The problem is that multiplication by x does not commute with the derivative:

x * [D_x] f(x) = xf'(x)
[D_x] (xf(x)) = xf'(x) + f(x)

This is one of the reasons I don't like mixing the independent variable into the function-of-derivative operator: it breaks commutativity. But I just said I did exactly that. How? Magic, basically.
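That "exponential of the derivative is the shift operator" fact is easy to verify numerically. A short Python sketch (the polynomial and names are my own; on a degree-d polynomial the exponential's Taylor series terminates at k = d, so the identity is exact):

```python
from math import factorial
import numpy as np

# e^{h D} p(x) = sum_k (h^k / k!) p^(k)(x); on a polynomial the sum is finite,
# and the result should equal p(x + h), the shifted polynomial.
h = 2.0
p = np.polynomial.Polynomial([1, -3, 0, 5])      # p(x) = 1 - 3x + 5x^3
shifted = sum((h**k / factorial(k)) * p.deriv(k) for k in range(p.degree() + 1))

print(shifted(1.0), p(1.0 + h))                  # both print 127.0
```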
I cheat by using a strange trick (which, as I mentioned, I don't remember off-hand, but iirc it is based on the identity I presented at the beginning of this comment) where, given an operator whose 0-eigenfunction (eigenfunction of eigenvalue 0) is 'f', I can find a new operator whose 0-eigenfunction is x*f. By arranging the terms right, the logical next step was to try to apply this to the functional equation for the Gamma function. I forget exactly, but iirc this should yield [e^(e^(-D_x))].

This almost works! There's a very non-rigorous but highly conclusive way of getting from this operator to the gamma function. Solving these kinds of DEs is often easy, because the solution is just a sum of exponentials whose exponential coefficients are the zeroes of the operator's function (a method you may be familiar with for linear DEs, and which can be proven to generalize in this way). The trouble is that e^(e^(-z)) has no finite zeroes, so you have to use a really tedious method that integrates a manipulation of what would normally be called the characteristic function, which here is just the function the derivative operator is taken of, and it's a hassle.

Except... it's not this operator that works, it's its evil twin, off by a sign error. I don't remember exactly how it goes, but you can see the method work flawlessly for the evil twin, and diverge for the operator you correctly arrive at. This frustrating issue has had me scour every step of what I've explained for a sign error, to no avail. If you don't want to work through everything I have (inadequately) explained to see that the method really does almost work, simply take the integral definition of the gamma function, do a particular u-substitution (I think it might be u = e^(-t), or maybe t = e^(-u)), and watch this peculiar double exponential appear. That is exactly what you'd expect for a solution of this function-of-derivative equation, and it is what you get by applying the integral-of-the-characteristic-function method to the evil twin operator.

On the other side of things, evaluating these extremely strange function-of-derivative operators is hardly possible in general. It is, however, fairly straightforward for any function with a definite integral representation, by which I mean one where the independent variable isn't in the bounds. (The variable being in the bounds would obviously be pointless: you replace the variable with the differential operator, and I have absolutely no definition for an "integral from 3 to the derivative operator"!) Many functions have such a representation: the Gamma function itself, giving Gamma(D_x) (a coincidence that so far has no application to the earlier problem); the Riemann Zeta function, giving Zeta(D_x) (related to, but not sufficient for, what I mentioned earlier about an operator whose eigenvalues are the zeroes of that function); and indeed 1/(s - D_x), which is equivalent to the Laplace transform, but with the original independent variable still there.

To give you a taste of how most of this works, I'll derive that last one, because it's a lot of fun. Notice that the integral from 0 to infinity of a negative exponential is the negative reciprocal of its exponential coefficient, that is:

int_{0, inf} e^(-zt) dt = 1/z

We can use this as an integral definition for the function 1/z. If we alter this nifty function 1/z, we can get the rather versatile function 1/(s - z).
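For what it's worth, the substitution that produces the double exponential is t = e^(-u); a quick check against the integral definition:

```latex
\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\,dt
  \overset{t = e^{-u}}{=}
  \int_{-\infty}^{\infty} e^{-xu}\, e^{-e^{-u}}\,du,
```

and there is the double exponential, matching the symbol of the operator above.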
Looking back, this yields:

int_{0, inf} e^((z-s)t) dt = 1/(s-z)

Replacing z with the differential operator D_x, we get

[ int_{0, inf} e^((D_x - s)t) dt ] = [ 1/(s - D_x) ]

which is a perfectly typical construction. Notice the square brackets [ ] on the outside of the integral, which denote that the integral is taken first, and then the function-of-differentiation is evaluated. We can relatively freely move those brackets inside, for use as a definition, for the simple reason that I haven't bothered to make rigorous when you can't do that, beyond "whenever it would be okay for a perfectly linear operator". When we do this, we should separate the exponential terms out to get a better look:

int_{0, inf} e^(-st) * [e^(tD_x)] dt = [ 1/(s - D_x) ]

Now, as a function-of-derivative operator, I would leave it here, but to see why this is a generalized Laplace transform, we should test it on an arbitrary function 'f':

[1/(s - D_x)] (f(x)) = int_{0, inf} e^(-st) * [e^(tD_x)] f(x) dt

The coolest part of all this study is how commonplace the fact is that the exponential function of a derivative is the shift operator, which is otherwise relegated to the characteristic functions in the niche subject of delay differential equations. So:

= int_{0, inf} e^(-st) * f(x+t) dt

This is our familiar Laplace transform. Except... isn't it supposed to be just f(t), not f(x+t)? Hehehe, indeed. Quirky, eh? (You can show this works as a definition by applying [s - D_x] to one of these generalized Laplace transforms of your favorite functions. It's cool! And it has effective but disappointingly limited and tedious applications to solving typical constant-coefficient LDEs.) I hope this small youtube comment made some maths enthusiasts a little more intrigued by the peculiar side of calculus.
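To spell out that final check for anyone playing along, apply (s - D_x) and integrate by parts (a sketch, assuming f grows slower than e^(st), and using D_x f(x+t) = d/dt f(x+t)):

```latex
(s - D_x)\int_0^\infty e^{-st} f(x+t)\,dt
  = s\int_0^\infty e^{-st} f(x+t)\,dt - \int_0^\infty e^{-st}\,\frac{d}{dt}f(x+t)\,dt
  = f(x),
```

since integration by parts gives int_{0, inf} e^(-st) (d/dt) f(x+t) dt = -f(x) + s int_{0, inf} e^(-st) f(x+t) dt. So [s - D_x] really does undo the transform.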