>0.3 cents is quite a bit different than your earlier claim --- "completely pukes at 4 decimals".
Yep - plug $100M into the code you just posted. Was it correct? Nope.
>So within acceptable legal tolerance.
Yes, when you hand-craft a solution to one specific instance by careful tuning. I noticed you ignored posting the code you claim will handle mortgages in general.
Care to cite a law you think gives "legal tolerance"? I suspect you're making that up. You must mean "within my understanding, being within a cent on a single transaction is OK", which is simply not true.
Not when you process thousands of loans (I developed the algorithm used to price tranches for mortgage bundling for a large mortgage company when I was in grad school - I do know a bit about this space, and I certainly know a lot about numerics - floating point, fixed point, adaptive, unums, the whole lot - you're simply compounding your errors).
>My standard, generalized, library routine is equally brief and works for amounts up to $100 billion with any interest rate expressed out to 3 decimals --- with nary a float in sight.
Post it :) Even tell me what numbers you think it handles. I bet I still break it, and my naive floating-point one above handles it.
I don't think you understand floating point. Do you ever check condition numbers on your code? Do you know what condition numbers are? I'll take your inability to post this simple magic algorithm you claim you have as evidence you don't have it.
For anyone following this thread, this example pretty clearly shows why naive replacement is going to bite you.
The generalized routine below computes compound-interest future value using scaled integer math (no floats) to 14-15 digits of accuracy --- roughly comparable to a double float while avoiding comparison decoherence. It aims for results accurate to the penny; a negative result indicates an overflow failure.
In comparison, floating point math provides no warning when the accuracy of the mantissa is exceeded --- out of sight, out of mind --- by design.
scaleFV(P,R,C,Y) int64
    SP := 10000                  //temporary principal scale factor, 4 decimals
    if (P>SP*1000) SP *= 10      //extend accuracy for larger amounts
    SR := 1000                   //annual rate scale factor, 3 decimals (pre-applied to R)
    N := C * Y                   //total number of compounding periods
    D := C * 100 * SR            //period rate divisor
    P := P * SP                  //scale the principal (int64)
    while N>0 do
        P := P + ((P * R) div D) //compound principal with period interest
        decr N                   //count the period
    return (P + SP div 2) div SP //unscale result, round back to nearest penny
Example:
    P = 10000000000    // $100 million in pennies
    R = 8000           // 8 percent scaled to 3 decimals
    C = 12             // 12 compounding periods per year
    Y = 30             // 30 years
Result = scaleFV(P,R,C,Y) = 109357296578 pennies, or $1,093,572,965.78
Yes, you're right --- that is relatively simple. Nothing there that can't be easily reduced to simple 4 function *integer* arithmetic. The only complication is using appropriate range and scale to meet real world requirements.
But why should I bother? I've already demonstrated how to achieve the impossible.
You will reduce finding the 5th root of a number to 4-function arithmetic? This should be a nice trick. Are you suggesting people do a bottom-up search from 0? Or some other mind-blowing mathematical property of exponents and logarithms that the top minds in the field have somehow not realized?
Ah man, using Newton-Raphson to find a numerical root of a single number. Even a Taylor series would've been an acceptable answer, but this just looks like you searched "root finding" and pasted the first result here. The mistakes are just compounding the more you talk.
Not to mention you now have to do your nice little penny-wise adjustment on each iteration of a root-finding algo to keep it in the confines of your imaginary system. I can't even.
Ah man. There is a recurring pattern here --- first you tell me it can't be done. Then you tell me you don't like how I would do it.
Your handheld calculator with its 8-bit processor is proof that almost any mathematical function can be reduced to basic 4-function integer arithmetic.
I'm telling you you have no idea how anything works or why design choices are made in fields where you're holding forth with an authority inversely proportional to your knowledge. Your very basic and ignorant idea of mathematics or modern finance isn't worth my time. Especially if you think Newton-Raphson should be any sort of standard for finding numerical roots in 2022. I shudder to think of how you'd approach a seemingly irrational exponent or something. Or how you'd incorporate something as simple as e^x. Would you use a rainbow table? The possibilities for abuse are truly endless.
>Especially if you think Newton-Raphson should be any sort of standard for finding numerical roots in 2022.
Nice straw man. I never proposed a standard --- only that it is possible.
But if you look too closely at whatever method you are using now --- you will likely find an algorithm that you will scoff at. There are only a handful of options and "magic" isn't one of them.
*Anything* a computer does is ultimately reduced down to the most basic mathematical functions using some sort of algorithm. The fact that you don't know it or see it doesn't mean it's not there.
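To make that concrete, here is a minimal sketch (my illustration, not anyone's posted routine) of the 5th-root reduction being argued about: a binary search using nothing but comparison, addition, and integer multiplication.

#include <cstdint>
#include <iostream>

// Largest r with r^5 <= x, using only compare/add/multiply (binary search).
// 7131 is the largest r whose 5th power fits in 64 bits, so m*m*m*m*m below never overflows.
uint64_t iroot5(uint64_t x)
{
    uint64_t lo = 0, hi = 7132;            // invariant: lo^5 <= x < hi^5
    while (lo + 1 < hi)
    {
        uint64_t m = lo + (hi - lo) / 2;   // m <= 7131 here
        if (m * m * m * m * m <= x) lo = m; else hi = m;
    }
    return lo;
}

int main()
{
    std::cout << iroot5(3200000) << std::endl; // prints 20, since 20^5 = 3,200,000
}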
I mean, this is now an absurd conversation. You're claiming you'll break down all mathematical functions and convert them to work exclusively with integers instead of real numbers. So you're proposing writing an entirely new mathematical standard, and acting as though it can be achieved without any investment of time and effort because you can demonstrate some simple calculation that has been done by hand for centuries.
I’m not even sure how I’m still conversing on these absurdities despite noting them.
Don't worry - I just started looking at his code - it fails on tons of common examples, completely demonstrating what we pointed out. There's no need to even have him try to compute things beyond what his ever-changing claims say it can do.
I mean, even not knowing the nuances of modern implementations of floating point arithmetic (which even I don’t fully grasp since I work very very far from the silicon on a daily basis), the whole concept of “I can reduce finance math to 4 operations” is absurd beyond reason. Like what will you do? Write a new root finding algo? Create a method that directly interfaces with the highly optimized logarithm calculating chips on modern microprocessors? Create your own new silicon for what is essentially just a special case of all modern usage that can be perfectly achieved with off the shelf hardware?
1) I posted my code as ` total = principal * (1+((rate/100)/t)) ^ (n*t)`, and you claimed your 10-line routine is "equally brief" ("My standard, generalized, library routine is equally brief"). By "equally brief" did you mean an order of magnitude larger?
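For reference, here is that one-liner wrapped into a compilable sketch (my wrapper, not the original poster's exact code), using the same penny/scaled-rate conventions as scaleFV so the two can be compared directly:

#include <cmath>
#include <cstdint>
#include <iostream>

// P = principal in pennies, R = annual rate scaled by 1000 (8.000% -> 8000),
// C = compounding periods per year, Y = years.
int64_t doubleFV(int64_t P, int64_t R, int64_t C, int64_t Y)
{
    double rate = (R / 1000.0) / 100.0;                         // 8000 -> 0.08
    double total = P * std::pow(1.0 + rate / C, double(C) * Y); // principal * (1 + rate/C)^(C*Y)
    return (int64_t)std::llround(total);                        // round to the nearest penny
}

int main()
{
    std::cout << doubleFV(10000000000LL, 8000, 12, 30) << std::endl; // 109357296578, the value quoted above
}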
2) Your code and example do not work using 64-bit integers. Using only 64-bit integer math, your code gives 11605682736, off by roughly 90% (easily checked in C/C++/C#). Your (P * R) term repeatedly overflows a 64-bit int when calculating your example, so in order to work, your code requires arbitrary-sized integers for the loop. Did you not realize this? You're not using scaled int64s at all --- you're using unbounded-size integers.
3) If you're going to use arbitrary-sized integers, then simply compute the exact numerator and denominator, then divide. It's actually accurate, as opposed to your mess. And it's simple to code.
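A minimal sketch of that approach (my illustration, using Boost.Multiprecision's cpp_int as a stand-in for whatever arbitrary-precision integer type you prefer), with the same argument conventions as scaleFV:

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using boost::multiprecision::cpp_int;

// P = principal in pennies, R = annual rate scaled by 1000, C = periods/year, Y = years.
// FV = P * (D + R)^N / D^N, held as one exact fraction until a single final division.
cpp_int exactFV(cpp_int P, cpp_int R, long C, long Y)
{
    cpp_int D = cpp_int(C) * 100 * 1000;  // period rate divisor, same scaling as scaleFV
    cpp_int num = P, den = 1;
    for (long i = 0; i < C * Y; ++i)
    {
        num *= D + R;
        den *= D;
    }
    return (2 * num + den) / (2 * den);   // one division at the end, rounded to the nearest penny
}

int main()
{
    // $100M at 8.000%, monthly, 30 years -> 109357296578 pennies, exact to the penny
    std::cout << exactFV(10000000000LL, 8000, 12, 30) << std::endl;
}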
4) You claim "a negative result indicates an overflow failure", which is wrong, since it can overflow twice (or more) in the same calculation, and, since you're using arbitrary-sized integers internally, the conversion depends on non-portable behavior. Both of these can be demonstrated quite easily.
5) You claim "floating point math provides no warning when the accuracy of the mantissa is exceeded," which is wrong - it marks overflow with an Infinity under IEEE 754 math (required by a significant number of languages, and provided on all major hardware), which will never revert the way your error code can. And for someone who understands numerics (and condition numbers), the calculation is easy to extend with the accuracy of the result so the user will know the error bounds.
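That flagging behavior is trivial to see (a minimal demonstration; IEEE 754 double semantics assumed, as essentially all current hardware provides):

#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    double x = std::numeric_limits<double>::max();
    double y = x * 2.0;                                  // exceeds the double range
    std::cout << y << " " << std::isinf(y) << std::endl; // prints "inf 1": flagged, not wrapped
}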
6) Your code is amazingly slow: your int64 version (which fails almost all tests) is ~6000x slower than the pure double version, and the arbitrary-sized int one is 45,000x slower (C#, Release build, tests run over the examples below).
7) Examples where your algorithm fails. Inputs are numbers that occur in real mortgages: 18.45% interest happened in the 1980s; 50-, 75-, and 100-year terms exist (50 is getting common in CA); 24 payments represents bi-weekly payoffs (and some places compound daily, or 365 times a year [1]). To test the accuracy of even your int64 routine, lower principal amounts are included below to show both your routines (the honest int64 one and the arbitrary-integer one) still fail. "My routines" are the double one above and, since you require arbitrary-sized integers, one that simply computes the exact numerator and denominator then divides:
10000,18450,365,50 -> both your int64 and arbitrary precision are off by $10.08, my routines are both perfect.
Want a lower interest rate?
10000,8000,365,50 -> both of yours are off by $0.12, both mine are still perfect.
Let's push them with a longer term, up the principal:
100000,8000,365,100 -> both yours are off by $6.72, both mine are perfect
Now, since your int64 version pukes immediately on larger numbers, let's only look at the arbitrary-sized versions.
500000,8000,365,100 -> yours off by $0.68, mine perfect
Maybe the problem is the daily compounding?
100000000,18450,12,50 -> yours off by $0.03, mine perfect
100000000,18450,12,100 -> yours off by $0.34, mine perfect
And for fun, let's look at a $100k loan, 15%, compounded hourly (say 8760 times a year) like a bad credit loan, for 25 years.... (Note your routine is stupidly slow for this one):
100000,15000,8760,25 - both yours are off by $1.21, both mine are correct.
I can keep producing errors all over this frontier.
8) So, to avoid "comparison decoherence" (!?) because you don't understand floating point (which can be made bit-exact, as I've done on many projects for binary file format interop), you instead produce demonstrably buggy, numerically faulty, slow, memory-unbounded code?
This is why people should not take advice on numerics from someone that does not understand numerics.
#include <iostream>
#include <cstdint>

int64_t scaleFV(int64_t P, int64_t R, int64_t C, int64_t Y)
{
    int64_t SP = 10000;           // temporary principal scale factor, 4 decimals
    if (P > SP * 1000) SP *= 10;  // extend accuracy for larger amounts
    int64_t SR = 1000;            // annual rate scale factor, 3 decimals (pre-applied to R)
    int64_t N = C * Y;            // total number of compounding periods
    int64_t D = C * 100 * SR;     // period rate divisor
    P = P * SP;                   // scale the principal (int64_t)
    while (N > 0)
    {
        std::cout << P * R << std::endl; // watch the errors fly!
        P = P + ((P * R) / D);    // compound principal with period interest
        N--;                      // count the period
    }
    return (P + SP / 2) / SP;     // unscale result, round back to nearest penny
}

int main()
{
    std::cout << scaleFV(10000000000LL, 8000, 12, 30) << " = 11605682736 != 109357296578\n" << std::endl;
}
I added a print in your loop to show you the overflow. Good luck. To help you, P * R for this calculation gets about 4x as big as an int64 can hold.
Another way to see it is your answer 109357296578, times your scaling 100000, requires P to hold > 53 bits. But your rate of 8000 is almost 13 bits. So P * R cannot fit in a 64 (well 63 for signed) int.
If you cannot fix it, care to explain how your code works only with int64 as you claim here and in other threads on this page? Maybe then you can address the errors I listed above where your routine failed?
>If you cannot fix it, care to explain how your code works only with int64 as you claim here and in other threads on this page?
Yes, your code pukes all over itself. And mine doesn't. Why?
For more than 2 decades now, Intel processors have included SSE "extensions" with a whole bank of 128 bit registers (XMM0 thru XMM15) with specialized math instructions for integer and floating point.
The compiler I use emits SSE opcodes by default for operations on 64 bit integers when building 64 bit executables. In other words, 128 bit processor registers are being used for the calculations. Overflow occurs when the final resultant is too large for int64.
>For more than 2 decades now, Intel processors have included SSE "extensions" with a whole bank of 128 bit registers (XMM0 thru XMM15) with specialized math instructions for integer and floating point.
That's interesting, since Intel did not add 128-bit wide integer math in SSE. Those 128-bit registers were SIMD - single instruction, multiple data - meaning at most 64-bit integers. Later extensions (not two decades back) added larger registers. I wrote the article for Intel for the release of AVX in 2011 [2], where Intel expanded the registers to 256 bits (still no 128-bit integer math). But there have certainly not been 128-bit wide integer operations for two decades. There have been 128-bit registers split into at most 64-bit sized components - the M in SIMD. I also wrote decompilers for Intel binaries, and just looked through that code; again, no 128-bit integers. Are you confusing 128 (or 256 or 512) bit wide registers that are split into components with actual 128- or 256- or 512-bit integer registers? Are you making stuff up?
Intel intrinsics denote register size and SIMD element size. For example, _mm256_add_epi16 means use 256-bit registers and add packed integers of 16-bit size. There are no _epi128 intrinsics, only sizes 8, 16, 32, 64 [1]. Another interesting place to look for these 128-bit integer instructions is [3]. Lots of IntN for N = 8, 16, 32, 64; none for N = 128. Here's [4] the Intel Software Development Manual from April 2022... also not seeing them. Section 4.6.2 lists supported data types - not a single 128-bit integer on that page. I don't see them in the AVX and other extension sections either.
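If anyone wants to see the lane behavior directly, here's a minimal SSE2 demonstration (a sketch; x86-64 with SSE2 assumed): adding within the 128-bit register is lane-wise, and the carry out of the low 64-bit lane never reaches the high lane, which a true 128-bit integer add would propagate.

#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>
#include <cstdio>

int main()
{
    __m128i a = _mm_set_epi64x(1, -1);  // high lane = 1, low lane = 0xFFFFFFFFFFFFFFFF
    __m128i b = _mm_set_epi64x(0, 1);   // high lane = 0, low lane = 1
    __m128i c = _mm_add_epi64(a, b);    // two independent 64-bit adds, not one 128-bit add
    uint64_t out[2];
    _mm_storeu_si128((__m128i*)out, c);
    std::printf("low=%llu high=%llu\n", (unsigned long long)out[0], (unsigned long long)out[1]);
    // prints "low=0 high=1": the low lane wrapped to 0 and no carry crossed into the high lane
}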
So I'm super interested in your compiler and language that automatically emits 128 bit wide integer SIMD instructions for Intel, since they are not in the opcode or intrinsic lists. Please name the language and compiler, and even post some working code to demonstrate this auto extension to 128 bit math.
And, if you're using 128 bit registers, why would you pick 64 bit math, which fails for all the cases above? You still have not addressed that any size register fails on the examples I posted above, including your auto-extending non-portable compiler tricks.
For example, $100000,8000,365,100 fails even on 128 bit, 256 bit, even infinite bit length registers. Because your algorithm itself is bad.
So, care to post your compiler, language, and code? Also, why did you keep telling us it was 64 bit when it wasn't?
>For example, $100000,8000,365,100 fails even on 128 bit, 256 bit, even infinite bit length registers. Because your algorithm itself is bad.
Really? So now we've progressed from $100 thousand to $100 million to the national budget?
Anything that can't handle the national budget is "bad"?
Every algorithm "fails" when pushed beyond its limits --- even the ones you use based on double-precision floats, but they do so silently, by losing precision in the mantissa, which is only 52 bits.
Out of sight, out of mind doesn't mean it's always "right". By the standard you're applying, your own algorithm is equally "bad".
So, where are your 128 bit SSE instructions? What compiler? What language?
Interestingly, Intel's own compiler, when operating on int128, does not emit these instructions you claim exist (you can check it on Godbolt.org and look at disassembly). Maybe you should tell them about these instructions.
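Here's the kind of probe I mean, which anyone can paste into godbolt.org (__int128 is a GCC/Clang extension, not standard C++): the generated assembly is a plain 64-bit MUL on general-purpose registers, with no XMM register in sight.

#include <cstdint>

// 64x64 -> 128-bit multiply; at -O2 on x86-64 GCC/Clang this compiles to a
// single MUL instruction, entirely in general-purpose registers.
unsigned __int128 mul128(uint64_t a, uint64_t b)
{
    return (unsigned __int128)a * b;
}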
Why does your routine fail for simple cases that the floating point does not?
Please stop deflecting. Can you post code, compiler, and language or not?
>Really? So now we've progressed from $100 thousand to $100 million to the national budget?
That example is a $100,000 loan where your algorithm fails. It is not the national budget.
Did you even try the examples I demonstrated where your algorithm fails?
>By the standard you're applying, your own algorithm itself is equally "bad".
Yet it's incredibly faster, does not rely on lying about mythical instruction sets, and handles simple cases yours didn't, even cases you claimed yours did handle.
Oh, and it uses honest 64 bit hardware.
So, code and compiler to demonstrate your SSE claims, or this thread has demonstrated what I expected it to.
Ah, so no reply on your compiler and language that makes 128 bit SSE code? Makes sense, since the instructions you claimed to use don't exist.
I wanted to test to see if I can even find cases where your algorithm works but the normal floating point one doesn't, and made a neat discovery.
*Your algorithm fails significantly in every range I test it.*
Here's a simple example: pick a normal loan, say 5%, 5 years, compounded monthly, and check your algorithm for every loan value from $1000 to $2000. Such small numbers, you'd think even your algorithm would work. No int64 overflows in sight.
It fails for 333 values in this range. The first few are $1000.34, $1006.41, $1007.01; the last few are $1993.71, $1999.18, $1999.78.
Test these :)
In fact, no matter what reasonable rates, compounding frequencies, and time lengths I put in, for any range of principals, your routine fails. Try it: pick R, C, Y, and a starting P value, then test, add 1 to P, test again, and you will fail over and over and over. The double-based method works. Amazing.
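Here's the shape of the sweep, as a sketch (hypothetical driver; it assumes the scaleFV listing above and the exactFV sketch from point 3 are in scope):

#include <iostream>

int main()
{
    long long R = 5000, C = 12, Y = 5;               // 5.000%, monthly, 5 years
    long long mismatches = 0;
    for (long long P = 100000; P <= 200000; ++P)     // $1000.00 .. $2000.00, penny by penny
    {
        if (exactFV(P, R, C, Y) != scaleFV(P, R, C, Y))
            ++mismatches;                            // scaled-integer result is off by a penny or more
    }
    std::cout << mismatches << std::endl;            // 333 failures reported above for this range
}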
Another example: try 7%, monthly, 8 years, $10,000 to $20,000, and you get 6337 errors. Largest failures are at $19,998.43 and $19,999.58; smallest at $10,013.60 and $10,014.75.
No failures for the double-based code.
Every single test I try like this, yours has a spectacular number of failures; the double-based one has none.
So you can try to add more scaling, which breaks other parts. If you analyze it carefully, you can prove that your method will fail for reasonable values no matter what scaling tricks you try to play. With fixed precision you simply lose too much off the top or from the bottom for the rates used in mortgages. You honestly need a sliding accuracy to make it work in 64 bits.
None of these values fail for the double-based routine.
On the front of trying to find cases where one routine fails but the other doesn't, I set the random ranges large enough to make both routines fail from time to time, then checked to see where yours might work while the floating-point one fails.
I guess that puts the nail in the coffin, right? Yours fails on every range, and out of this 10,000-random-value test, yours failed 3596 times where the double one didn't. The double one failed only 2 times where yours didn't. Both failed a lot overall. This test is how I discovered that yours actually fails in places it seemingly should not - like everywhere.
Did you ever test yours?
"My standard, generalized, library routine is equally brief and works for amounts up to $100 billion with any interest rate expressed out to 3 decimals"..... I hope you're not using this for anything real world!
This thread is my new go to for an example when I teach numerical methods stuff, to show people that naïve trying to beat floating point pretty much always fails.
Now it's completely transparent to anyone reading this far why using fixed point is almost always a terrible idea, even for simple things like naïve future value calculations, even when an absolutely certain master of fixed point like yourself claims it and even provides an "algorithm."
Oh, another useful fact for you - you claimed your routine is good up to $100 billion, that it takes input in pennies, and interest scaled by 1000. Your temp scale factor starts as 10,000, and then is multiplied by 10, so your principal in pennies is scaled by 100,000 before the loop.
The first operation in the loop requires computing P * R. For a rate of, say, 8.000%, your rate is 8000, so computing P * R gives ($100B * 100) * 100,000 * 8000 = 8 * 10^21, which is a 73-bit number (74 bits for signed).
How exactly do you fit this 74 bit computation in a 64 bit signed int again?
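You can verify the width mechanically (a quick check using the same Boost cpp_int as earlier; msb() returns the index of the highest set bit):

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main()
{
    using boost::multiprecision::cpp_int;
    cpp_int P = cpp_int(10000000000000) * 100000; // $100B in pennies, scaled by SP = 100,000
    cpp_int PR = P * 8000;                        // the loop's P * R term at a rate of 8.000%
    std::cout << msb(PR) + 1 << std::endl;        // prints 73: too wide for int64's 63 magnitude bits
}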
Hopefully this helps you understand your scaled integer solution better.