When computers do arithmetic with fixed-length decimal (or binary) approximations, numbers of vastly different sizes cannot be accurately subtracted. For example, one million and one, 1,000,001, represented to only six decimals is 1.00000 × 10^6. The difference between one million and one and one million is one, 1,000,001 - 1,000,000 = 1, but if this is done with a six-decimal machine, the difference is zero, 1.00000 × 10^6 - 1.00000 × 10^6 = 0. In this example, the baby is thrown out with the bath water, and there is 100% loss of accuracy. This kind of loss of a significant portion of the small difference between two relatively large numbers plagues numerical computation.
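The same failure is easy to reproduce. Machine double precision carries roughly 16 significant decimal digits, so shifting the six-decimal story up to the machine's own precision shows the effect. This Mathematica sketch (our illustration, not part of the original example) contrasts exact and machine arithmetic:

    (* Exact integer arithmetic keeps the difference of 1. *)
    10^16 + 1 - 10^16        (* returns 1 *)

    (* Machine doubles hold about 16 digits, so 1.*^16 + 1.
       rounds back to 1.*^16 and the difference is lost. *)
    (1.*^16 + 1.) - 1.*^16   (* returns 0. *)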
The one general kind of series whose error term is bounded by the size of the next term is an alternating series whose terms decrease in size to zero.
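In symbols, this is the standard alternating series estimate: if a_0 >= a_1 >= a_2 >= ... >= 0 and a_n -> 0, then the series converges to a sum S with

$$\Bigl| S - \sum_{n=0}^{N} (-1)^n a_n \Bigr| \le a_{N+1},$$

that is, the error in truncating the series is no larger than the first term left off. For e^-x with x > 0 the terms (-x)^n/n! alternate in sign, and their sizes x^n/n! decrease once n > x, so the estimate applies to the tail of the series.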
Compare this sum with 1/S when you run the program with x = +30.0. Since the sum with x = +30 is approximately e^30, and since 1/e^30 = e^-30, these should be approximately equal. Are they actually?
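If you do not have the program in front of you, here is a minimal sketch of the kind of summation loop the exercise assumes, written in Mathematica; the function name taylorSum and the cutoff at 100 terms are our choices for illustration:

    (* Sum the first 101 terms of the Taylor series for Exp[x]
       in machine precision, building each term from the last. *)
    taylorSum[x_] := Module[{term = 1.0, s = 1.0},
      Do[term = term*x/n; s += term, {n, 1, 100}];
      s]

    taylorSum[30.0]      (* close to E^30, about 1.06865*10^13 *)
    1/taylorSum[30.0]    (* close to E^-30, about 9.35762*10^-14 *)
    taylorSum[-30.0]     (* ruined by cancellation of huge alternating terms *)

At x = -30.0 the largest terms occur near n = 30 and are on the order of 10^11, so round-off at roughly 16 digits leaves absolute errors around 10^-4, vastly larger than the true sum e^-30, which is about 10^-13.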
Compare your summation with the computer's built-in algorithm, for example,
    1/N[Exp[30], 10]
    N[Exp[-30], 10]

Mathematica can do this sum with its built-in Sum command in two ways,

    x = -30;
    N[Sum[x^n/n!, {n, 0, 100}]]

or

    x = -30.0;
    Sum[x^n/n!, {n, 0, 100}]

Also try to sum symbolically first with

    Sym = Together[Sum[y^n/n!, {n, 0, 100}]];
    y = -30;
    Print[Sym, N[Sym]]

The two numerical sum methods give different answers, and the machine-precision one is not accurate because of the subtraction bug, but the specifics depend on your particular machine! Watch out for the subtraction bug; it could sneak up and bite your computer.
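A closing remedy, suggested by the comparison with 1/S above (a sketch under the same 101-term cutoff, not from the original text): summing the all-positive series for e^+30 involves no cancellation, so its reciprocal recovers e^-30 accurately even in machine precision.

    x = 30.0;
    1/Sum[x^n/n!, {n, 0, 100}]   (* about 9.35762*10^-14, matching N[Exp[-30]] *)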