Fractional Calculus and Taylor Series
Fractional calculus is the attempt to give meaning to expressions of the form $\sqrt{\frac{d}{dx}}f(x)$, where $\sqrt{\frac{d}{dx}}$ is some operator that, when applied twice, is equal to the derivative, and to other problems in the same vein. Fractional differentiation generalizes that idea to raising the derivative operator to an arbitrary exponent, and likewise for fractional integration. The idea that derivatives and integrals can be raised to an arbitrary exponent is motivated by analogy with how repeated multiplication can be extended to exponentiation. Just as exponentiation is a much broader idea than repeated multiplication, fractional calculus might lead to interesting implications far beyond repeated differentiation. If you consider the $n$th order repeated integral of a constant over some bounds, the result can be interpreted as the size of a “square” in $n$-dimensional space: the length of an interval, the area of a square, the volume of a cube, and so on. Applying the same interpretation to fractional integration leads to the absurdity of considering the size of a “square” in a space that does not have an integer number of dimensions. In spite of this, fractional calculus has been put to practical use in solving problems in electrochemistry, anomalous diffusion, PID control theory and even classical mechanics. For example, Niels Abel showed in 1823 that the tautochrone problem can be expressed in terms of fractional calculus.
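As a concrete version of the repeated-integral claim above (my own worked example, using the Cauchy formula for repeated integration that appears later in this post), the $n$-fold integral of a constant $b$ from $0$ to $x$ is \begin{equation*} I_0^n b = \frac{1}{(n - 1)!}\int_0^x (x - t)^{n - 1}\, b\, dt = \frac{b\, x^n}{n!}, \end{equation*} which, up to the factor of $n!$, scales like the volume $x^n$ of an $n$-dimensional cube with side $x$.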
But there is a problem. There is not just one fractional derivative; in fact multiple fractional derivative and integral operators have been found. All of these operators are fractional generalizations of differentiation and integration, and they are, in general, incompatible with each other. That is, two different fractional derivatives produce different results for the same function. So fractional calculus is not uniquely defined. Additionally, continuing the analogy between fractional calculus and exponentiation, you might expect fractional derivatives and integrals to follow the index rule $\frac{d^\alpha}{dx^\alpha}\frac{d^\beta}{dx^\beta} = \frac{d^{\alpha + \beta}}{dx^{\alpha + \beta}}$ for any real numbers $\alpha$ and $\beta$. This is not the case. For example, consider the Riemann-Liouville fractional integral $I_a^\alpha f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x (x - t)^{\alpha - 1} f(t)dt$ with exponent $\alpha=\frac{1}{2}$, sometimes called the semi-integral. The derivative of the semi-integral of a constant is $\frac{d}{dx} (I_0^{1/2} b) = \frac{b}{\sqrt{\pi x}}$. However, the semi-integral of the derivative of a constant is $I_0^{1/2} (\frac{d}{dx} b) = 0$. So the Riemann-Liouville fractional integral does not satisfy the index rule, and more generally fractional derivatives and integrals do not satisfy it. These two properties, standing in contrast to the simple generalizing concept of fractional calculus, just do not feel right. It is like an itch you cannot scratch. That being said, mathematical objects often do not behave the way you initially expect them to, and intuition about new ideas in math is often wrong. However, sometimes we are not asking the right questions, and math built on those questions may have quirks and hang-ups due to something that should have been added in, or left out. With that in mind, if we know where these two properties come from and what causes fractional calculus to behave like it does, then that might enable us to ask better questions, or at least understand what was missing in our initial intuition of how it should behave.
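A quick numerical check of the semi-integral example above (a minimal sketch of my own; the function names are mine and it assumes SciPy is available) computes the Riemann-Liouville semi-integral of a constant by quadrature and compares it with the closed form $I_0^{1/2} b = 2b\sqrt{x/\pi}$, whose derivative is indeed $\frac{b}{\sqrt{\pi x}}$.

```python
from math import gamma, sqrt, pi
from scipy.integrate import quad

def rl_integral(f, alpha, x, a=0.0):
    """Riemann-Liouville fractional integral I_a^alpha f, evaluated at x by quadrature."""
    value, _ = quad(lambda t: (x - t) ** (alpha - 1) * f(t), a, x)
    return value / gamma(alpha)

b, x = 3.0, 2.0
print(rl_integral(lambda t: b, 0.5, x))   # semi-integral of the constant b: ~4.787
print(2 * b * sqrt(x / pi))               # closed form 2*b*sqrt(x/pi):      ~4.787
# The semi-integral of the derivative of a constant is trivially 0, while the
# derivative of the semi-integral above is b / sqrt(pi * x), which is nonzero.
```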
Before jumping in, I should be clear about what I mean by fractional differentiation. For the purposes of this post I will consider a fractional derivative to be any operator with the following properties: it acts on functions, it has a variable called the index (or exponent), it reproduces repeated differentiation when the index is a positive integer, and it is a linear operator like the derivative and integral operators.
The Failure of the Index Rule
Fractional differentiation and integration fail to satisfy the index rule. Why is this the case? Partially it is the result of differential calculus itself. The index rule would imply that $\frac{d^{-1}}{dx^{-1}}$ must be the inverse of the derivative, since $\frac{d^{-1}}{dx^{-1}} \frac{d}{dx} = \frac{d}{dx} \frac{d^{-1}}{dx^{-1}} = \frac{d^0}{dx^0}$, where $\frac{d^0}{dx^0}$ is the identity operator. However, while the integral is nearly the inverse of the derivative, the derivative is not actually invertible. This can be seen by taking the derivative of an integral, $\frac{d}{dx} \int_a^x f(t)dt = f(x)$, and comparing it to the integral of a derivative, $\int_a^x f'(t)dt = f(x) - f(a)$. So fractional derivatives, which by definition must reproduce repeated differentiation for positive integer values of the index, cannot satisfy the index rule, since $\frac{d^{-1}}{dx^{-1}}$ does not exist. At best the equation $\frac{d^\alpha}{dx^\alpha}\frac{d^\beta}{dx^\beta} = \frac{d^{\alpha + \beta}}{dx^{\alpha + \beta}}$ applies for some restricted range of $\alpha$ and $\beta$ such that it does not require the existence of an inverse.
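A short SymPy check of that asymmetry (my own sketch; the test function is an arbitrary choice):

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
f = sp.sin(t) + 3                    # an arbitrary test function of t

# Derivative of the integral: recovers f(x) exactly.
print(sp.simplify(sp.diff(sp.integrate(f, (t, a, x)), x)))        # sin(x) + 3

# Integral of the derivative: recovers f only up to the lost value f(a).
print(sp.simplify(sp.integrate(sp.diff(f, t), (t, a, x))))        # sin(x) - sin(a)
```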
This leads to an obvious workaround. What if we restricted the set of functions that we consider for calculus such that differentiation and integration are invertible? Consider functions of the form $f(x) = P(x)e^x$, where $P(x)$ is a polynomial of finite order, and let us call the set of all functions of that form $\mathbb{S}$. Given a function in $\mathbb{S}$, its only antiderivative which is also in $\mathbb{S}$ is the one produced by the integral operator $\int_{-\infty}^x f(t)dt$. That integral and the derivative $\frac{d}{dx}$, when restricted to functions in $\mathbb{S}$, are true inverses of each other. So applying this integral and derivative repeatedly, and using the Cauchy formula for repeated integration to simplify the integral, we can define the operator, \begin{equation*} J^n f(x)=\begin{cases}\frac{1}{(n -1)!}\int_{-\infty}^x (x - t)^{n - 1}f(t)dt, & \text{for $n \geq 1$} \newline f(x), & \text{for $n = 0$} \newline \frac{d^{|n|}}{dx^{|n|}} f(x), & \text{for $n \leq -1$}\end{cases} \end{equation*} The operators $J^n$, when restricted to functions in $\mathbb{S}$, form a group under composition and satisfy the index rule $J^nJ^m f(x) = J^{n+m} f(x)$ for all $n, m \in \mathbb{Z}$. Since $J$ is invertible, a fractional version of this operator does not force any violation of the index rule, so it is possible, but not guaranteed, that a fractional version would satisfy it.
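A small SymPy sketch of this operator (my own construction; `J` and the sample function are my choices) checks that, on an element of $\mathbb{S}$, the integral from $-\infty$ undoes the derivative and a case of the index rule holds:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

def J(n, f):
    """The operator J^n acting on an expression f(x) in S, for integer n."""
    if n == 0:
        return f
    if n < 0:
        return sp.diff(f, x, -n)
    # n >= 1: Cauchy formula for repeated integration with lower bound -infinity
    integrand = (x - t) ** (n - 1) * f.subs(x, t) / sp.factorial(n - 1)
    return sp.integrate(integrand, (t, -sp.oo, x))

f = (x**2 + 1) * sp.exp(x)                  # a sample element of S
print(sp.simplify(J(-1, J(1, f)) - f))      # 0: derivative of the antiderivative
print(sp.simplify(J(1, J(-1, f)) - f))      # 0: antiderivative of the derivative
print(sp.simplify(J(2, J(-2, f)) - f))      # 0: an instance of the index rule
```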
Invertibility of the Derivative and Taylor Series
Restricting the domain of functions is one way to make integration invertible. Alternatively, we can consider why differentiation is not invertible in the first place. This can be seen from the fundamental theorem of calculus, but a more direct way is to look at the action of the derivative on a Taylor series. Given that complex analytic functions are uniquely defined by their Taylor series, a complex analytic function $f(z)$ can be represented as the sequence of complex numbers $(a_0, a_1, a_2, …, a_k, …)$ such that $f(z) = \sum_{k=0}^\infty \frac{a_k}{k!}z^k$. Now let us consider the derivative of $f(z)$. It is the function $f'(z) = \frac{d}{dz} f(z) = \sum_{k=0}^\infty \frac{a_{k + 1}}{k!}z^k$, which is represented by $(a_1, a_2, a_3, …, a_{k + 1}, …)$. So in terms of the sequence representation of analytic functions, the derivative is an operator which removes the first element from the sequence. The $n$th derivative then removes the first $n$ elements, resulting in the representation $(a_n, a_{n + 1}, a_{n + 2}, …, a_{n + k}, …)$. Looking at differentiation in this way, it is clear that it is not invertible. It cannot be inverted because taking the derivative of a function literally removes the information that was contained in the first element of the sequence, and there does not exist an operation which can, in general, reconstruct what the original first element was. Note that functions in the set $\mathbb{S}$ are defined such that their Taylor series about a point $z_0$, in the limit as $z_0 \to -\infty$, has the series representation $(0, 0, 0, …, 0, …)$. So no information is actually lost when taking the derivative of functions in $\mathbb{S}$, since the discarded leading coefficient can be reconstructed trivially.
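A small SymPy illustration of the shift view (my own sketch; the helper `taylor_seq` is mine): the coefficient sequence of the derivative is the original sequence shifted left by one, with $a_0$ discarded.

```python
import sympy as sp

z = sp.symbols('z')

def taylor_seq(f, n=6):
    """First n entries of the sequence (a_0, a_1, ...) with f(z) = sum a_k z^k / k!."""
    return [sp.diff(f, z, k).subs(z, 0) for k in range(n)]

f = sp.sin(z) + 3
print(taylor_seq(f))              # [3, 1, 0, -1, 0, 1]
print(taylor_seq(sp.diff(f, z)))  # [1, 0, -1, 0, 1, 0]  -- shifted left, the 3 is gone
```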
With this in mind, if we want to make calculus invertible without significantly restricting the set of allowed functions, we could instead define generalized functions and a generalized derivative such that, every time we differentiate one of these functions, the leading term of the series representation gets packaged along with the new function. Then the inverse operator to the derivative necessarily exists: it unpacks this extra information and places it back as the first entry of the series representation, followed by the terms in the series of the derivative. With this, the integral and the derivative would be inverses of each other, and in the case of repeated differentiation and integration the multiple pieces of extra information would be packed and unpacked sequentially. A version of calculus defined in this way is invertible, and fractional calculus derived from these generalized functions could satisfy the index rule.
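To make this concrete, here is a minimal toy sketch (entirely my own construction, using truncated coefficient sequences rather than full functions) in which differentiation packages the leading coefficient on a stack and integration pops it back:

```python
class PackagedFunction:
    """A function represented by Taylor coefficients (a_0, a_1, ...) about 0, together
    with a stack of coefficients that were packaged away by differentiation."""

    def __init__(self, coeffs, stack=None):
        self.coeffs = list(coeffs)         # truncated sequence (a_0, a_1, ..., a_{n-1})
        self.stack = list(stack or [])     # coefficients removed by earlier derivatives

    def deriv(self):
        # Shift the sequence left, but package a_0 instead of discarding it.
        return PackagedFunction(self.coeffs[1:], self.stack + [self.coeffs[0]])

    def integ(self):
        # Undo deriv: pop the most recently packaged coefficient back to the front.
        # (Only defined once something has been packaged by a previous deriv.)
        return PackagedFunction([self.stack[-1]] + self.coeffs, self.stack[:-1])

    def __repr__(self):
        return f"PackagedFunction({self.coeffs}, stack={self.stack})"

f = PackagedFunction([3, 1, 0, -1, 0, 1])        # sin(z) + 3, truncated to 6 coefficients
print(f.deriv())                                 # sequence shifted left, the 3 is packaged
print(f.deriv().integ())                         # identical to f: deriv and integ invert
print(f.deriv().deriv().integ().integ())         # repeated packing/unpacking round-trips too
```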
Conflicting Definitions of Fractional Calculus
Now that we have some idea of why fractional calculus normally cannot satisfy the index rule, and some ideas of how to get around that, let us consider the multiple definitions of fractional calculus. Is there any reason why fractional calculus needs multiple definitions, and where do these definitions come from? First I looked at the derivations of various fractional derivatives and fractional integrals, but there is a wide variety of approaches to deriving fractional calculus, all using different methods to define the operators. This demonstrated that there are multiple definitions of fractional derivatives and integrals, but it did not help me understand why that must be the case. I thought that there must be a better way to describe fractional calculus. In the end I did find a way of looking at fractional calculus that simplifies some aspects of it, and provides an answer to why there are multiple definitions.
When trying to see why the derivative is not invertible, it was useful to think of functions as being represented by the collection of coefficients in their Taylor series, and I have found that this representation is useful here as well. Now that we are considering fractional derivatives, in order to maintain the view of functions as being defined by Taylor series, I will assume that we only consider functions which are complex analytic and whose fractional derivatives are all complex analytic as well. So given some suitable function $f(z)$, what is its fractional derivative, $\frac{d^\alpha}{dz^\alpha} f(z)$, in this series representation? Taking $\alpha$ as a variable, the series representation as a function of $\alpha$ is $(a_0(\alpha), a_1(\alpha), a_2(\alpha), …, a_k(\alpha), …)$. If $\alpha=0$ it must reproduce the function $f(z)$, so $a_k(0) = a_k$. A fractional derivative by definition must reproduce repeated differentiation when the index is a positive integer, so for a positive integer $m$ the functions $a_k(\alpha)$ must satisfy $a_k(m) = a_{k + m}$. In this view a fractional derivative is an operator which takes the series representation of a function and produces a sequence of functions of $\alpha$ that is compatible with the original sequence and which, when evaluated at a particular value of $\alpha$, can be interpreted as the series representation of $\frac{d^\alpha}{dz^\alpha} f(z)$.
Now, given this representation of fractional calculus, let us also assume that this fractional derivative satisfies the index rule $\frac{d^\alpha}{dz^\alpha}\frac{d^\beta}{dz^\beta} = \frac{d^{\alpha + \beta}}{dz^{\alpha + \beta}}$ for any $\alpha$ and $\beta$. However, we have already seen that this is usually not possible, so we also have to assume that the derivative has been generalized to be invertible (either by restricting to a subset of analytic functions like the set $\mathbb{S}$, or by using generalized functions as described before). Given these assumptions, all of the functions $a_k(\alpha)$ in the series representation of a fractional derivative are necessarily defined in terms of each other. This follows from the statement $\frac{d^m}{dz^m}\frac{d^\alpha}{dz^\alpha} f(z) = \frac{d^{m + \alpha}}{dz^{m + \alpha}} f(z)$, where $m$ is a positive integer, which holds because this fractional derivative satisfies the index rule. So the functions $a_k(\alpha)$ must satisfy the equation $a_{k + m}(\alpha) = a_k(\alpha + m)$; letting $k = 0$ gives $a_m(\alpha) = a_0(\alpha + m)$. This means that if a fractional derivative satisfies the index rule, the series representation of $\frac{d^\alpha}{dz^\alpha} f(z)$ is entirely determined by $a_0(\alpha)$, and all of the other functions in the sequence are shifted copies of that one function.
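As a worked example of this structure (my own, using the standard result that, for the common definitions with lower limit $-\infty$, $\frac{d^\alpha}{dz^\alpha} e^{\lambda z} = \lambda^\alpha e^{\lambda z}$ for $\lambda > 0$), take $f(z) = e^{\lambda z}$, so that $a_k = \lambda^k$. Then \begin{equation*} a_k(\alpha) = \lambda^{k + \alpha} = a_0(\alpha + k), \end{equation*} so the whole sequence consists of shifted copies of $a_0(\alpha) = \lambda^{\alpha}$, and the constraints $a_k(0) = a_k$ and $a_k(m) = a_{k + m}$ are satisfied.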
Fractional Calculus and Interpolation
Thinking of this visually, we can plot the coefficients in the series representation of $f(z)$ as the sequence of points $(k, a_k)$. This produces a scatter plot that completely defines the function, provided it is analytic. The series representation of $\frac{d^\alpha}{dz^\alpha}f(z)$ is defined by $a_0(\alpha)$. We found previously that for a positive integer $m$ the coefficients satisfy $a_k(m) = a_{k+m}$; letting $k=0$ gives $a_0(m) = a_m$. Graphing the function $a_0(\alpha)$, note that it passes through all of the points $(k, a_0(k)) = (k, a_k)$, which are exactly the points of the scatter plot. So the function which defines $\frac{d^\alpha}{dz^\alpha} f(z)$ is an interpolation between the points $(k, a_k)$. Conversely, any procedure that, given such a scatter plot, defines an interpolation between the points $(k, a_k)$ can be viewed as defining a fractional derivative which satisfies the index rule.
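A small numerical sketch of this picture (my own; the choice of $f(z) = e^{2z}$ and the interpolation $a_0(\alpha) = 2^{\alpha}$ are mine): interpolating the coefficients $a_k = 2^k$ with $a_0(\alpha) = 2^{\alpha}$ and summing the series defined by $a_k(\alpha) = a_0(\alpha + k)$ at $\alpha = \frac{1}{2}$ reproduces the expected half-derivative $\sqrt{2}\, e^{2z}$.

```python
from math import exp, factorial, sqrt

# f(z) = e^(2z) has coefficients a_k = 2^k in the convention f(z) = sum a_k z^k / k!.
a0 = lambda alpha: 2.0 ** alpha        # one interpolation through the points (k, 2^k)

def frac_deriv(alpha, z, terms=40):
    """Sum the series whose coefficients are a_k(alpha) = a0(alpha + k)."""
    return sum(a0(alpha + k) * z ** k / factorial(k) for k in range(terms))

z = 0.7
print(frac_deriv(0.5, z))              # ~5.735
print(sqrt(2) * exp(2 * z))            # the expected half-derivative sqrt(2) * e^(2z)
```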
With this connection between fractional differentiation and interpolation, it is clear that, because there are an infinite number of interpolations through any set of coefficients, there are likewise an infinite number of fractional derivatives. These fractional derivatives satisfy the conditions given previously for a fractional derivative, with the addition of the index rule. So it is not just that there are multiple definitions of fractional differentiation; there must in fact be an infinite number of fractional derivatives, all incompatible with each other.
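Continuing the sketch above (again my own example), the interpolation $2^{\alpha} + \sin(\pi \alpha)$ passes through exactly the same points $(k, 2^k)$ as $2^{\alpha}$, yet it produces a different half-derivative of $e^{2z}$, giving two fractional derivatives that agree on all integer orders but are incompatible in between.

```python
from math import factorial, pi, sin

def half_deriv(a0, z, terms=40):
    """Half-derivative of e^(2z) built from an interpolation a0 of its coefficients 2^k."""
    return sum(a0(0.5 + k) * z ** k / factorial(k) for k in range(terms))

interp_a = lambda alpha: 2.0 ** alpha
interp_b = lambda alpha: 2.0 ** alpha + sin(pi * alpha)   # agrees with interp_a at integers

z = 0.7
print(half_deriv(interp_a, z))   # ~5.735
print(half_deriv(interp_b, z))   # ~6.232 -- a different result for the same function
```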
As in the case of attempting to make calculus invertible, I can think of two possible ways to get around the fact that there are an infinite number of definitions of fractional differentiation. One way to select a unique fractional derivative is to apply some “natural” constraint such that for any suitable Taylor series there is only one interpolation satisfying the constraint. This would in turn define a unique fractional derivative operator. Note that this is somewhat equivalent to just declaring which fractional derivative will be used in the first place, except that it allows for the possibility of using multiple fractional derivatives, provided they are “compatible”, producing a larger set of functions for which the combined fractional derivative is defined. Alternatively, we could try to generalize functions to a larger class of objects such that their interpolation is uniquely defined. Then all of the infinitely many different fractional derivatives would represent the same underlying operation acting on different subsets of these generalized functions.