I recently drew attention to a toy lisp in Forth.
The example program was a scheme program that counts
how many ways change can be made in coins for e.g. 100 dollar cents.
The lisp had to be improved to be able to compile the program. The
improved version can hardly manage an input of 100 and takes a
noticeable time; 200 and higher fail.
A more sane program fills in a table where each entry is derived
from previous entries. A possibility to change for 100 cent is e.g.
adding 10 cent to the possibilities for 90 cent.
This is a kind of reversed dynamic programming.
Recursion is silly (if you can avoid it).
Not really.
Recursion is elegant — but actually isn't any
more efficient than iteration, and of course
it requires stack space.
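For concreteness, the table-filling approach described above can be sketched
in Scheme roughly like this (an illustrative sketch, not code from the
thread; the name count-change-iterative and the coin list are assumptions):

;; ways[a] = number of ways to make change for amount a.
;; Process one denomination at a time; a way to change amount a that
;; uses the current coin is a way to change (a - coin) plus that coin.
(define (count-change-iterative amount coins)
  (let ((ways (make-vector (+ amount 1) 0)))
    (vector-set! ways 0 1)                 ; one way to change 0 cents
    (for-each
     (lambda (coin)
       (do ((a coin (+ a 1)))
           ((> a amount))
         (vector-set! ways a
                      (+ (vector-ref ways a)
                         (vector-ref ways (- a coin))))))
     coins)
    (vector-ref ways amount)))

;; (count-change-iterative 100 '(1 5 10 25 50)) => 292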
I recently drew attention to a toy lisp in Forth.
The example program was a scheme program that counts
how many ways change can be made in coins for e.g. 100 dollar cents.
The lisp had to be improved to be able to compile the program.
The improved version can hardly manage an input of 100 and takes
a noticeable time; 200 and higher fail.
This is a kind of reversed dynamic programming.
That is usually called memoization.
I haven't tried this so I may be missing something, but the above
follows a common pattern.
On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
wrote:
Recursion is silly (if you can avoid it).
Not really.
Recursion is elegant — but actually isn't any
more efficient than iteration, and of course
it requires stack space.
Not necessarily. Scheme compilers actually are *required* to turn
self recursion into a loop (except in the case of a supported "debug"
mode).
And yes, most of the time (or always?) there
does exist iterative counterpart solution
Always. Given enough resources (which could be code or memory space),
any recursion can be turned into an equivalent loop, and vice versa.
1000000 | 1333983445341383545001 | 15.24 | 580696
That is quite different from the result none showed.
| time coinsusa 1,000,000
| 666793341266850001
| real 0m0.353s
I guess you are computing something different?
The time for 1,000,000 is also quite large, 15.24s vs 10ms (with iForth).
-marcel
albert@cherry.(none) (albert) writes:
A more sane program fills in a table where each entry is derived
from previous entries. A possibility to change for 100 cent is e.g.
adding 10 cent to the possibilities for 90 cent.
This is a kind of reversed dynamic programming.
What is reversed about this?
- Anton
Paul Rubin <no.email@nospam.invalid> writes:
I haven't tried this so I may be missing something, but the above
follows a common pattern.
I still haven't worked out the details of the force/delay approach, but
here (at bottom) is a Scheme version that implements memoization using a
hash table by brute force. 100 cents and 200 cents take no perceptible
time.
I ran it with Guile 3.0 (compilation enabled) and got these results on a
small Hetzner VM (number of cents to be changed, result, cpu time in
seconds, and ram consumption in KB):
|---------+------------------------+-------+--------|
| amt     | result                 | sec   | KB     |
|---------+------------------------+-------+--------|
|     100 |                    293 |  0.01 |   9264 |
|     200 |                   2728 |  0.01 |   9340 |
|    1000 |                2103596 |  0.01 |  10080 |
|   10000 |           139946140451 |  0.13 |  15176 |
|  100000 |      13398445413854501 |  1.25 |  67564 |
|  500000 |   41707305668679272501 |  5.76 | 293964 |
| 1000000 | 1333983445341383545001 | 15.24 | 580696 |
|---------+------------------------+-------+--------|
================================================================
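As a rough illustration of the brute-force hash-table memoization being
described (a sketch only, not the code from the post; it assumes Guile's
make-hash-table/hash-ref/hash-set! and keys of the form (amount . coin-list)):

;; SICP-style cc, memoized on the pair (amount . remaining-coins).
(define table (make-hash-table))

(define (cc amount coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (null? coins)) 0)
        (else
         (let ((key (cons amount coins)))
           (or (hash-ref table key)
               (let ((result (+ (cc amount (cdr coins))
                                (cc (- amount (car coins)) coins))))
                 (hash-set! table key result)
                 result))))))

;; (cc 100 '(100 50 25 10 5 1)) => 293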
mhx@iae.nl (mhx) writes:
I guess you are computing something different?
Aha, one difference I notice is that Albert's code doesn't have a 100
cent coin. I included that because we have them in the US, though they
aren't used much except for in parking meters. They were introduced
partly for use in vending machines, but those now tend to have dollar
bill slots and sometimes also take credit cards.
Let me try again:
|---------+--------------------+-------+--------|
| amt     | result             | sec   | KB     |
|---------+--------------------+-------+--------|
|     100 |                292 |  0.13 |  33644 |
|     200 |               2435 |  0.00 |   9412 |
|    1000 |             801451 |  0.01 |   9852 |
|   10000 |         6794128501 |  0.05 |  14316 |
|  100000 |     66793412685001 |  1.11 |  63904 |
|  500000 |  41682501983425001 |  6.07 | 240260 |
| 1000000 | 666793341266850001 | 10.67 | 475044 |
|---------+--------------------+-------+--------|
So we're getting the same results for 1000000 now, but my version is
still a lot slower. Maybe the memoization is not working right. Hmm.
albert@cherry.(none) (albert) writes:
You must be kidding. The memoization using standard tools impressed me.
Compared to the result below it is only one order of magnitude.
I see, so the algorithms are basically similar. I haven't examined your
code closely yet. But I see you are using a much more efficient memo
table than I am. Yours is indexed on the offset into the coin list,
while mine uses the entire list. I might try adapting mine, maybe to
use arrays instead of a hash table.
I noticed I got an 8x speedup and 5x memory reduction by simply
reversing the order of the coin list, i.e. using (100 50 25 10 5 1)
instead of (1 5 10 25 50 100).
In reality, 50 cent coins and dollar coins are rarely used in the US.
50 cent coins are large enough to be cumbersome, and dollar coins never
became popular because they are too easily confused with quarters (they
are just slightly larger).
Canada has all the coins that the US has, plus a 2 dollar coin, and they
actually use them. France before the Euro had even more, I believe.
CREATE kind-of-coins 50 , 25 , 10 , 5 , 1 , 0 ,
: add DUP stride GCD TO stride
limit 1+ OVER DO I OVER - aap[] 2@ I aap[] +2! stride +LOOP DROP ;
On 2023-12-27, George Neuner <gneuner2@comcast.net> wrote:
On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
wrote:
Recursion is silly (if you can avoid it).
Not really.
Recursion is elegant — but actually isn't any
more efficient than iteration, and of course
it requires stack space.
Not necessarily. Scheme compilers actually are *required* to turn
self recursion into a loop (except in the case of a supported "debug"
mode).
Careful there: not all self-recursion is tail recursion!
For instance, the obvious way to write a recursive factorial fails
to be tail-recursive, but can be rewritten to tail form with
accumulator passing.
(I know you know all this, likely better than me, but `tis the
season to have your egg nogged.)
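A small Scheme illustration of the distinction (invented names, not from the
thread):

;; Not tail-recursive: the multiplication happens after the recursive
;; call returns, so each call needs its own stack frame.
(define (fact n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

;; Tail-recursive: the recursive call is the last thing done, so a
;; conforming Scheme implementation runs it in constant stack space.
(define (fact-iter n)
  (let loop ((n n) (acc 1))
    (if (= n 0)
        acc
        (loop (- n 1) (* n acc)))))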
On 28/12/2023 9:44 am, George Neuner wrote:
On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
wrote:
Recursion is silly (if you can avoid it).
Not really.
Recursion is elegant — but actually isn't any
more efficient than iteration, and of course
it requires stack space.
Not necessarily. Scheme compilers actually are *required* to turn
self recursion into a loop (except in the case of a supported "debug"
mode).
I think some of the Lisps can do it also - at least with
(optimize (debug 0) (space 3))
That's kind of offensive (to the programmer) which - not unlike ChatGPT -
puts humans in a diminished role. So the question becomes will humans have
any role at all? Much has been said about 'general purpose subroutines'.
ISTM the only logical reason to have them and (by implication) standards
is that they benefit automation.
On Wed, 27 Dec 2023 23:12:07 -0000 (UTC), Kaz Kylheku
<433-929-6894@kylheku.com> wrote:
On 2023-12-27, George Neuner <gneuner2@comcast.net> wrote:
On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
wrote:
Recursion is silly (if you can avoid it).
Not really.
Recursion is elegant — but actually isn't any
more efficient than iteration, and of course
it requires stack space.
Not necessarily. Scheme compilers actually are *required* to turn
self recursion into a loop (except in the case of a supported "debug"
mode).
Careful there: not all self-recursion is tail recursion!
Oops, you're right. I should have said Scheme is required to turn *tail-recursion* into a loop.
However, it is true that any recursion can be turned into a loop, and
Kaz Kylheku <433-929-6894@kylheku.com> writes:
However, it is true that any recursion can be turned into a loop, and
Can it, by an algorithm? That's a theoretical question that's above my
pay grade.
Yes of course, every compiler has to turn recursion into a loop, since
assembly language doesn't have recursion. You push the recursive args
onto a stack, and then the loop pops args from the stack and processes
them. Example (quicksort):
push initial arg onto stack
while (stack not empty):
pop arg from stack
create two partitions p1 and p2
push both onto stack
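Here is one way that outline could look as runnable Scheme, with the
recursion replaced by an explicit stack of index ranges (an illustrative
sketch; all names are invented):

;; Quicksort on a vector: instead of calling itself, the procedure keeps
;; an explicit stack (a list) of (lo . hi) ranges still to be partitioned.
(define (quicksort! v)
  (define (swap! i j)
    (let ((tmp (vector-ref v i)))
      (vector-set! v i (vector-ref v j))
      (vector-set! v j tmp)))
  ;; Lomuto partition of v[lo..hi]; returns the pivot's final index.
  (define (partition! lo hi)
    (let ((pivot (vector-ref v hi)))
      (let loop ((i lo) (j lo))
        (cond ((= j hi) (swap! i hi) i)
              ((< (vector-ref v j) pivot)
               (swap! i j) (loop (+ i 1) (+ j 1)))
              (else (loop i (+ j 1)))))))
  (let loop ((stack (list (cons 0 (- (vector-length v) 1)))))
    (unless (null? stack)
      (let* ((range (car stack))
             (lo (car range))
             (hi (cdr range))
             (rest (cdr stack)))
        (if (>= lo hi)
            (loop rest)
            (let ((p (partition! lo hi)))
              ;; push both partitions onto the stack
              (loop (cons (cons lo (- p 1))
                          (cons (cons (+ p 1) hi) rest))))))))
  v)

;; (quicksort! (vector 3 1 4 1 5 9 2 6)) => #(1 1 2 3 4 5 6 9)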
I think what he had in mind is an iteration ``without a stack''. It's
not clear what that would mean. If I translate a stack automaton to a
finite state machine, am I using a stack?
It seems Stephen Cook has proved that whatever a stack automaton can do
with complexity f(n) a finite automaton can do with O(n lg n). Knuth
tells the story in
Julieta Shem <jshem@yaxenu.org> writes:
I think what he had in mind is an iteration ``without a stack''. It's
not clear what that would mean. If I translate a stack automaton to a
finite state machine, am I using a stack?
You can't in general turn it to a pure FSM. You need some kind of
auxiliary storage. That is necessary by the time hierarchy theorem,
the classification of languages by the Chomsky hierarchy, etc. For
example, consider a string of parentheses like (()((((()())))))))()())
Are the parentheses balanced? You can tell by counting them, but a pure
FSM can't do that.
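The counting approach only needs a single integer, but that integer can grow
without bound, which is exactly what a fixed, finite set of states cannot
track. A small Scheme sketch (invented name):

;; Balanced iff the running depth never goes negative and ends at zero.
(define (balanced? str)
  (let loop ((chars (string->list str)) (depth 0))
    (cond ((negative? depth) #f)
          ((null? chars) (zero? depth))
          ((char=? (car chars) #\() (loop (cdr chars) (+ depth 1)))
          ((char=? (car chars) #\)) (loop (cdr chars) (- depth 1)))
          (else (loop (cdr chars) depth)))))

;; (balanced? "(()())") => #t
;; (balanced? "(()")    => #f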
It seems Stephen Cook has proved that whatever a stack automaton can do
with complexity f(n) a finite automaton can do with O(n lg n). Knuth
tells the story in
That must be a misunderstanding, but I'll watch the video when I get a chance. It's always worth listening to Knuth.
Therefore, your computer is actually a finite state machine.
From the FysForth days. It will run out of stack before it
encounters out of memory, but it clocks less than 1ms when doing
900000 count_change .
Don't ask me how it works :--)
-marcel
DOC
(*
P54C-166 iForth 1.11e
FORTH> count_changes
50 50 0.000 seconds elapsed.
100 292 0.000 seconds elapsed.
150 972 0.001 seconds elapsed.
200 2435 0.000 seconds elapsed.
250 5126 0.000 seconds elapsed.
300 9590 0.001 seconds elapsed.
350 16472 0.000 seconds elapsed.
400 26517 0.000 seconds elapsed.
450 40570 0.001 seconds elapsed.
500 59576 0.000 seconds elapsed.
550 84580 0.000 seconds elapsed.
600 116727 0.001 seconds elapsed.
650 157262 0.000 seconds elapsed.
700 207530 0.000 seconds elapsed.
750 268976 0.001 seconds elapsed.
800 343145 0.001 seconds elapsed.
850 431682 0.000 seconds elapsed.
900 536332 0.001 seconds elapsed.
950 658940 0.000 seconds elapsed.
1000 801451 0.000 seconds elapsed.
1050 965910 0.001 seconds elapsed.
1100 1154462 0.001 seconds elapsed.
1150 1369352 0.000 seconds elapsed.
1200 1612925 0.001 seconds elapsed.
1250 1887626 0.001 seconds elapsed.
1300 2196000 0.000 seconds elapsed.
1350 2540692 0.000 seconds elapsed.
1400 2924447 0.001 seconds elapsed.
1450 3350110 0.001 seconds elapsed.
1500 3820626 0.000 seconds elapsed. ok
*)
ENDDOC
On 2023-12-28, George Neuner <gneuner2@comcast.net> wrote:
However, it is true that any recursion can be turned into a loop, ...
Can it, by an algorithm? That's a theoretical question that's above my pay grade.
(Or, well, below; if I were a CS academic, I'd make less.)
It doesn't seem obvious to me. Consider a graph traversal (like garbage >collection marking). When you visit N pointers of a graph node, only the
last visit can be a tail call. The first N-1 have to return to that
node.
There is a mildly famous (in some circles) pointer-reversal trick that
has been used to implement a stackless traversal for garbage collection.
But it's not a simple code transformation; it mutates the nodes in order
to, effectively, create a return stack within the graph itself.
That could not possibly be a valid automatic compiler optimization.
Rubin's program also can be run with an older version of Guile (2.0.13).
My guess is that Rubin made a fresh start each time
with the mysterious ?singleton call.
\ SORTI (slightly slower than SORTM, slightly faster than SORTI2)
: sorti ( addr u -- )
cells over + 0 -rot begin ( 0 al1 ah1 .. aln ahn )
begin
over cell+ over u>= while
2drop dup 0= if
drop exit then
repeat
partition
again ;
Kaz Kylheku <433-929-6894@kylheku.com> writes:
However, it is true that any recursion can be turned into a loop, and
Can it, by an algorithm? That's a theoretical question that's above my
pay grade.
Yes of course, every compiler has to turn recursion into a loop, since
assembly language doesn't have recursion. You push the recursive args
onto a stack, and then the loop pops args from the stack and processes
them. Example (quicksort):
push initial arg onto stack
while (stack not empty):
pop arg from stack
create two partitions p1 and p2
push both onto stack
\ SORTM (best-performing)
: sort2 ( al ah -- )
begin
over cell+ over u< while
partition recurse
repeat
2drop ;
- anton
In article <2023Dec30.084142@mips.complang.tuwien.ac.at>,
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
\ SORTM (best-performing)
: sort2 ( al ah -- )
begin
over cell+ over u< while
partition recurse
repeat
2drop ;
I hate it if one uses U< in comparing numbers that happen
to be positive.
albert@cherry.(none) (albert) writes:
In article <2023Dec30.084142@mips.complang.tuwien.ac.at>,
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
\ SORTM (best-performing)
: sort2 ( al ah -- )
begin
over cell+ over u< while
partition recurse
repeat
2drop ;
I hate it if one uses U< in comparing numbers that happen
to be positive.
What makes you think that this code compares numbers with U<? It
actually compares two addresses, and I certainly don't want my sorting
programs to fail when the sorted array straddles the addresses
2147483647 and 2147483648 on a 32-bit system. Note that an "a" prefix
typically does not indicate a signed number.
Followups set to comp.lang.forth
- anton
albert@cherry.(none) (albert) writes:
Rubin's program also can be run with an older version of Guile (2.0.13).
My guess is that Rubin made a fresh start each time
Yes, I re-ran the program for each value, to get the timing.
with the mysterious ?singleton call.
singleton? just checks whether a list has exactly one element. It is
used to detect the case where only one coin is available to make change
with. In that case, there are either 0 ways or 1 way to make change,
depending on whether the coin denomination divides the amount to be
changed.
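In code that special case is just a divisibility test (a hypothetical helper
for illustration, not taken from the program):

;; With a single denomination, change exists only if it divides the amount.
(define (single-coin-ways amount coin)
  (if (zero? (remainder amount coin)) 1 0))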
you overwrite the `cc' to henceforth use the memoizer.
That means that the second calculation of the same results
is immediate and that larger results benefit from the
smaller results that are calculated before them.
True?
On the other hand the alternative algorithm was easier to understand
even to lispers who are thoroughly familiar with recursion.
It runs fast without the need for external optimisation like
memoization.
albert@cherry.(none) (albert) writes:
On the other hand the alternative algorithm was easier to understand
even to lispers who are thoroughly familiar with recursion.
It runs fast without the need for external optimisation like
memoization.
I don't think I examined it, but is that the one that iteratively fills
the lookup table before doing the requested value?
I found in the memoizing Scheme version, after computing (cc 1000000),
the memo table held 670004 entries, which is much less than the 5
million that would be used by precomputing all the earlier entries.
When I removed the optimization (I originally hadn't thought of it as
one) of quickly checking divisibility when only one coin denomination
was available, the table size went up to 2470005 entries, still half
of what it would have been from precomputing everything.
Can the memoization "see" that once cc(N-5,5) is known,[..]
cc(N-4,5) = cc(N-5,5), up till (cc(N-1,5) (which are zero till now
I never used naive memoization much because the table size becomes impractical for real-world problems. If it is known how to compress
N-dim tables (in hopefully a simple way), I'd be very interested
to know how it is done (even when N=1).
Can the memoization "see" that once cc(N-5,5) is known,
cc(N-4,5) = cc(N-5,5), up till cc(N-1,5) (which are zero till now
in the iterative version),
and that cc(N,5) becomes cc(N,4)+cc(N-5,5) . That is chill.
(The iterative version also saves some 50% of the time going 50 -> 1)
To convert Fibonacci to a linear recursion one needs to keep two
(usually the two previous) values
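In Scheme that two-value iteration looks like this (a small sketch with an
invented name, using the standard fib(0) = 0, fib(1) = 1 convention):

;; Keeps only the two most recent values; constant space, linear time.
(define (fib-iter n)
  (let loop ((k n) (a 0) (b 1))   ; a = fib(i), b = fib(i+1)
    (if (zero? k)
        a
        (loop (- k 1) b (+ a b)))))

;; (fib-iter 10) => 55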
albert@cherry.(none) (albert) writes:
Can the memoization "see" that once cc(N-5,5) is known,
cc(N-4,5) = cc(N-5,5), up till (cc(N-1,5) (which are zero till now
in the iterative version),
I'm not sure what you mean by that. The memoization scheme proceeds
recursively until it reaches a place where it knows how to get a
numerical answer. Then it computes and remembers that answer, so it can
use it the next time the calculation gets there.
and that cc(N,5) becomes cc(N,4)+cc(N-5,5) . That is chill.
Yes that is right.
(The iterative version also saves some 50% of the time going 50 -> 1)
I don't think I understand your iterative algorithm, but I think there
is one that actually doesn't have to remember the calculation results
all the way back to the beginning. It only has to remember the last 100
(or whatever the size of the largest coin is), and similarly for the
other coin sizes. I'll see if I can code that.
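One way such a bounded-memory version might look (purely illustrative, not
the code referred to above; for each denomination d it keeps only a ring of
the last d partial counts, so the storage is the sum of the denominations
rather than the full amount):

;; ways(n, coins up to d) = ways(n, smaller coins) + ways(n - d, coins up to d)
(define (count-change-window amount coins)
  (let ((rings (map (lambda (d) (make-vector d 0)) coins)))
    (let loop ((n 0) (ways-with-all 0))
      (if (> n amount)
          ways-with-all
          (let inner ((cs coins) (rs rings)
                      (ways (if (zero? n) 1 0)))  ; ways(n) with no coins
            (if (null? cs)
                (loop (+ n 1) ways)
                (let* ((d (car cs))
                       (ring (car rs))
                       (slot (modulo n d))
                       (w (+ ways (if (>= n d) (vector-ref ring slot) 0))))
                  (vector-set! ring slot w)   ; overwrites ways(n - d, up to d)
                  (inner (cdr cs) (cdr rs) w))))))))

;; (count-change-window 100 '(1 5 10 25 50 100)) => 293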
On 2024-01-04, Kaz Kylheku <433-929-6894@kylheku.com> wrote:
In conclusion, while memoized Fibonacci is a great textbook
example for illustrating the power of memoization,
it would behoove the textbooks to also add the observation
that it still works with a two-slot LRU cache.
However, due to the function activation frames, memory
use is still O(n).
--
Mastodon: @Kazinator@mstdn.ca
On 2024-01-04, mhx <mhx@iae.nl> wrote:
+1
Can the memoization "see" that once cc(N-5,5) is known,[..]
cc(N-4,5) = cc(N-5,5), up till (cc(N-1,5) (which are zero till now
I never used naive memoization much because the table size becomes
impractical for real-world problems. If it is known how to compress
N-dim tables (in hopefully a simple way), I'd be very interested
to know how it is done (even when N=1).
In some cases, eviction may be possible (i.e. caching).
If we consider memoized Fibonacci, once it has calculated fib(7),
it's not going to have to calculate fib(5) again. That value could
be evacuated from the table. In fact, I suspect that a LRU table
with only several entries would serve.
This is easily confirmed code-wise, using TXR Lisp:
$ txr -i fib.tl
Read-eval-disagree-downvote-insult treadmill initialized! Enter an expression:
1> (compile 'fib-rec)
#<vm fun: 2 param>
(pprof (fib 200))
malloc bytes: 74356
gc heap bytes: 66272
total: 140628
milliseconds: 2
453973694165307953197296969697410619233826
(trace fib-rec)
nil
(fib 8)
(fib-rec (8 #(nil nil nil))
(fib-rec (6 #(nil nil nil))
(fib-rec (4 #(nil nil nil))
(fib-rec (2 #(nil nil nil))
(fib-rec (0 #(nil nil nil))
1)
(fib-rec (1 #(nil nil nil))
1)
2)
(fib-rec (3 #((2 . 2) nil nil))
(fib-rec (1 #((2 . 2) nil nil))
1)
(fib-rec (2 #((2 . 2) nil nil))
2)
3)
5)
(fib-rec (5 #((4 . 5) (3 . 3) (2 . 2)))
(fib-rec (3 #((4 . 5) (3 . 3) (2 . 2)))
3)
(fib-rec (4 #((4 . 5) (3 . 3) (2 . 2)))
5)
8)
13)
(fib-rec (7 #((6 . 13) (5 . 8) (4 . 5)))
(fib-rec (5 #((6 . 13) (5 . 8) (4 . 5)))
8)
(fib-rec (6 #((6 . 13) (5 . 8) (4 . 5)))
13)
21)
34)
34
Calculating (fib 200) is instantaneous. The trace shows
that the caching with a three element table is sufficient;
E.g. once (fib-rec 5) is calculated, the second time
it is encountered, it no longer recurses to (fib-rec 3)
and (fib-rec 4); it just returns 8.
This is the code in fib.tl:
(defun fib-rec (n table)
  (or (if (meq n 0 1) 1)
      (cdr [find n table : car])
      (flow (+ (fib-rec (ppred n) table)
               (fib-rec (pred n) table))
        (let f)
        (cons n)
        (set [(nrot table -1) 0])
        (ret f))))

(defun fib (n)
  (fib-rec n (vector 3)))
The auxiliary table parameter always holds the same 3 element
vector object throughout the recursion. The vector holds
cons pairs of the form (cons n (fib n)), thus mapping Fibonacci
indices to values.
Whenever there is a table miss, the nrot function is used to
rotate the table entries down, and the new value is placed
in the first element, [table 0].
In fact, a two element table is sufficient; if we change
the (vector 3) to (vector 2), the performance is not affected.
Moreover, it doesn't matter whether the recursive step is
(+ (fib-rec (ppred n) table) (fib-rec (pred n) table)) or
(+ (fib-rec (pred n) table) (fib-rec (ppred n) table));
a two-element cache is sufficient. (TXR Lisp is strictly
evaluated, left-to-right, like Common Lisp, so if we
vary the order of the arguments to +, we can control the
order in which the recursive calls take place.)
In conclusion, while memoized Fibonacci is a great textbook
example for illustrating the power of memoization,
it would behoove the textbooks to also add the observation
that it still works with a two-slot LRU cache.
Mastodon: @Kazinator@mstdn.ca
You have managed to arrive at this iterative version, that keeps
track of only two values.
Sometimes, the Fibonacci recurrence is expressed as a matrix
multiplication on a state vector [a b]
[1 1][a] => [a + b]
[1 0][b] [ a ]
a receives the sum of a + b, while b receives the prior value of a.
Raising this matrix to an arbitrary integer power is how we arrive
at the closed form solution.
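A sketch of that view in Scheme (illustrative only: invented names, 2x2
matrices as flat four-element lists, exponentiation by repeated squaring,
standard fib(0) = 0 convention):

;; 2x2 matrices represented as (a b c d), meaning [[a b] [c d]].
(define (mat* m n)
  (let ((a (car m)) (b (cadr m)) (c (caddr m)) (d (cadddr m))
        (e (car n)) (f (cadr n)) (g (caddr n)) (h (cadddr n)))
    (list (+ (* a e) (* b g)) (+ (* a f) (* b h))
          (+ (* c e) (* d g)) (+ (* c f) (* d h)))))

;; [[1 1] [1 0]]^n by repeated squaring: O(log n) matrix multiplications.
(define (mat-power m n)
  (cond ((= n 0) '(1 0 0 1))                          ; identity
        ((even? n) (let ((h (mat-power m (quotient n 2)))) (mat* h h)))
        (else (mat* m (mat-power m (- n 1))))))

;; [[1 1] [1 0]]^n = [[fib(n+1) fib(n)] [fib(n) fib(n-1)]]
(define (fib-mat n)
  (cadr (mat-power '(1 1 1 0) n)))

;; (fib-mat 10) => 55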
Kaz Kylheku wrote:
Sometimes, the Fibonacci recurrence is expressed as a matrix
multiplication on a state vector [a b]
[1 1][a] => [a + b]
[1 0][b] [ a ]
a receives the sum of a + b, while b receives the prior value of a.
Raising this matrix to an arbitrary integer power is how we arrive
at the closed form solution.
Really nice! I'll keep this general idea in a safe place.
On 2024-01-05, mhx <mhx@iae.nl> wrote:
Kaz Kylheku wrote:
Sometimes, the Fibonacci recurrence is expressed as a matrix
multiplication on a state vector [a b]
[1 1][a] => [a + b]
[1 0][b] [ a ]
a receives the sum of a + b, while b receives the prior value of a.
Raising this matrix to an arbitrary integer power is how we arrive
at the closed form solution.
Really nice! I'll keep this general idea in a safe place.
You don't have to; it's found in textbooks.
Using linear algebra techniques found in introductory texts, you can do
this:
     n
[1 1]
[1 0]
The details escape me, much like a departing youthful appearance, but basically we can factor it into a form
  n
U X V
where U, X, V are all 2x2 matrices. The middle one X is special
in that it is diagonal (consisting of a diagonal trace of eigenvalues).
A diagonal matrix is easy to exponentiate.
Something like that.
That gives us that closed form formula for Fibonacci with those
square roots of five in it.
Closely relating to the Golden Ratio! Which is (/ (+ 1 (sqrt 5)) 2).
The ratio of successive values of the Fibonacci numbers approaches
the golden ratio.
(/ (fib 50) (fib 49))
1.61803398874989
Kaz Kylheku wrote:
Closely relating to the Golden Ratio! Which is (/ (+ 1 (sqrt 5)) 2).
The ratio of successive values of the Fibonacci numbers approaches
the golden ratio.
Almost like a party trick. I love to find elegant techniques in
unrelated fields of research that solve (my) practical problems in
unexpected ways.