• recursion is silly

    From none) (albert@21:1/5 to All on Wed Dec 27 16:17:58 2023
    XPost: comp.lang.lisp

    Recursion is silly (if you can avoid it).

I recently drew attention to a toy Lisp in Forth.
The example program was a Scheme program that counts
how many ways there are to make change for e.g. 100 dollarcent.
The Lisp had to be improved to be able to compile the program.
The improved version can hardly manage an input of 100 and takes
a noticeable time; 200 and above fail.

A more sane program fills in a table where each entry is derived
from previous entries. A possibility to change 100 cents is e.g.
adding 10 cents to a possibility for 90 cents.
This is a kind of reversed dynamic programming.

In Forth it looks like this:
    \ --------------------- 8< ---------------------------
    WANT VALUE
    : _ 0 ; \ Don't care value
    CREATE kind-of-coins 1 , 5 , 10 , 25 , 50 , 0 ,

    _ VALUE limit
    _ VALUE aap

    \ One possibility to change 0 cent, others initially zero.
    : init 1 , limit 0 DO 0 , LOOP ;

    : aap[] CELLS aap + ;

\ Add possibilities for denomination to `aap
: add ( coin -- ) limit 1+ OVER DO I OVER - aap[] @ I aap[] +! LOOP DROP ;

: doit ( -- )
  1 ARG[] EVALUATE TO limit
  HERE TO aap init
  kind-of-coins BEGIN $@ DUP WHILE add REPEAT DROP
  limit aap[] ? CR ;

    \ --------------------- 8< ---------------------------
You may not have to WANT VALUE.
$@ is another name for @+ / COUNT .
If you can't handle command line arguments, you can replace
`` 1 ARG[] EVALUATE '' with the limit you want.
Around 70 dollars you get overflow on 32-bit systems.
It is hard to measure the speed on 32-bit systems;
suffice it to say that it is certainly under 10 ms.

My Forth (lina) is not the fastest; however:

    time coinsusa 1,000,000
    666793341266850001
    real 0m0.353s
    user 0m0.281s
    sys 0m0.029s

For reference, this is the Scheme program.
    \ --------------------- 8< ---------------------------
(define (count-change amount)
  (cc amount 5))

(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount
                     (- kinds-of-coins 1))
                 (cc (- amount
                        (first-denomination kinds-of-coins))
                     kinds-of-coins)))))

(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))


    (display (count-change 100))
    (newline)
    \ --------------------- 8< ---------------------------

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julieta Shem@21:1/5 to albert@cherry. on Wed Dec 27 14:28:50 2023
    XPost: comp.lang.lisp

    albert@cherry.(none) (albert) writes:

I recently drew attention to a toy Lisp in Forth.
The example program was a Scheme program that counts
how many ways there are to make change for e.g. 100 dollarcent.

By the way, this is a well-known problem, likely familiar to
everyone in this newsgroup. The Scheme procedure you presented is
from section 1.2.2 of

    Structure and Interpretation of Computer Programs
    Gerald J. Sussman, Harold Abelson, Julie Sussman

The Lisp had to be improved to be able to compile the program. The
improved version can hardly manage an input of 100 and takes a
noticeable time; 200 and above fail.

    A more sane program fills in a table where each entry is derived
    from previous entries.

The Scheme version you presented builds such a table --- it's called the
execution stack. (Just another name for ``table''.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julieta Shem@21:1/5 to LIT on Wed Dec 27 14:33:46 2023
    XPost: comp.lang.lisp

    zbigniew2011@gmail.com (LIT) writes:

    Recursion is silly (if you can avoid it).

    Not really.

    Recursion is elegant — but actually isn't any
    more efficient than iteration, and of course
    it requires stack space.

And yes, most of the time (or always?) there
does exist an iterative counterpart solution.

    What do you mean iterative? For instance, Gerald Sussman and Harold
    Abelson classify as `iterative' the procedure

(define (f n . rest)
  (let ((acc (if (null? rest) 1 (car rest))))
    (if (= n 1)
        acc
        (f (- n 1) (* acc n))))),

    which is self-referential.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to albert@cherry. on Wed Dec 27 10:03:48 2023
    XPost: comp.lang.lisp

    albert@cherry.(none) (albert) writes:
    This is a kind of reversed dynamic programming.

    That is usually called memoization.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to albert@cherry. on Wed Dec 27 17:18:46 2023
    albert@cherry.(none) (albert) writes:
A more sane program fills in a table where each entry is derived
from previous entries. A possibility to change 100 cents is e.g.
adding 10 cents to a possibility for 90 cents.
This is a kind of reversed dynamic programming.

    What is reversed about this?

    - anton
    --
    M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
    comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
    New standard: https://forth-standard.org/
    EuroForth 2023: https://euro.theforth.net/2023

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From minforth@21:1/5 to none on Wed Dec 27 18:43:55 2023
    XPost: comp.lang.lisp

    none wrote:

    Recursion is silly (if you can avoid it).

I recently drew attention to a toy Lisp in Forth.
The example program was a Scheme program that counts
how many ways there are to make change for e.g. 100 dollarcent.
The Lisp had to be improved to be able to compile the program.
The improved version can hardly manage an input of 100 and takes
a noticeable time; 200 and above fail.

    IOW educated brute force can beat elegant recursion.

You are right in a brutish way.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to Paul Rubin on Wed Dec 27 12:22:31 2023
    XPost: comp.lang.lisp

    Paul Rubin <no.email@nospam.invalid> writes:
    This is a kind of reversed dynamic programming.
    That is usually called memoization.

    Added: in Scheme, it can be idiomatic to implement memoization using
    force and delay. So for the coin problem you'd have a lookup table
    indexed by the amount to be changed, and the sizes of coins available,
    giving the number of ways to change that amount with those coins.

    Each slot in the table would say something like (delay (cc amount coins))
    where cc is a recursive function that would return a delayed sum of
    the different table lookups corresponding to different breakdowns of
    the amount. Finally, you would force the table entries for the amounts 1,2...100 or whatever.

    I haven't tried this so I may be missing something, but the above
    follows a common pattern.
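
Something along these lines, perhaps (a sketch I haven't run, assuming
Guile / R7RS delay and force; the table here is indexed by the amount
and by how many kinds of coins may be used, and all the names are mine
for illustration):

(define coins #(1 5 10 25 50 100))  ; coin k is (vector-ref coins (- k 1))

(define (count-change amount)
  ;; table[a][k]: promise for the ways to change a with the first k coins
  (define table
    (let ((t (make-vector (+ amount 1) #f)))
      (do ((a 0 (+ a 1))) ((> a amount) t)
        (vector-set! t a (make-vector 7 #f)))))
  (define (cc a k)                       ; base cases, else force the slot
    (cond ((= a 0) 1)
          ((or (< a 0) (= k 0)) 0)
          (else (force (vector-ref (vector-ref table a) k)))))
  (do ((a 1 (+ a 1))) ((> a amount))     ; fill every slot with a delay
    (do ((k 1 (+ k 1))) ((> k 6))
      (vector-set! (vector-ref table a) k
                   (delay (+ (cc a (- k 1))
                             (cc (- a (vector-ref coins (- k 1))) k))))))
  (do ((a 1 (+ a 1))) ((> a amount))     ; force bottom-up, amounts 1,2,...
    (do ((k 1 (+ k 1))) ((> k 6))
      (cc a k)))
  (cc amount 6))

(display (count-change 100))             ; should print 293
(newline)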

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to Paul Rubin on Wed Dec 27 13:59:22 2023
    XPost: comp.lang.lisp

    Paul Rubin <no.email@nospam.invalid> writes:
    I haven't tried this so I may be missing something, but the above
    follows a common pattern.

    I still haven't worked out the details of the force/delay approach, but
    here (at bottom) is a Scheme version that implements memoization using a
    hash table by brute force. 100 cents and 200 cents take no perceptible
    time.

    I ran it with Guile 3.0 (compilation enabled) and got these results on a
    small Hetzner VM (number of cents to be changed, result, cpu time in
    seconds, and ram consumption in KB):

|---------+------------------------+-------+--------|
|     amt | result                 |   sec |     KB |
|---------+------------------------+-------+--------|
|     100 |                    293 |  0.01 |   9264 |
|     200 |                   2728 |  0.01 |   9340 |
|    1000 |                2103596 |  0.01 |  10080 |
|   10000 |           139946140451 |  0.13 |  15176 |
|  100000 |      13398445413854501 |  1.25 |  67564 |
|  500000 |   41707305668679272501 |  5.76 | 293964 |
| 1000000 | 1333983445341383545001 | 15.24 | 580696 |
|---------+------------------------+-------+--------|

    ================================================================

(define (singleton? lst)
  (and (not (null? lst))
       (null? (cdr lst))))

(define allcoins '(1 5 10 25 50 100))

(define (cc amount coins)
  (cond ((= amount 0) 1)
        ((< amount 0) 0)
        ((null? coins) 0)
        ((singleton? coins)
         (if (= (remainder amount (car coins)) 0)
             1
             0))
        (#t (+ (cc (- amount (car coins)) coins)
               (cc amount (cdr coins))))))

(define (memoize f)
  (define a (make-hash-table))
  (define (g . args)
    (let ((v (hash-ref a args)))
      (if v v (hash-set! a args (apply f args)))))
  g)

(set! cc (memoize cc))

(define amt (string->number (cadr (command-line))))

(display (cc amt allcoins))
(display "\n")

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to All on Wed Dec 27 17:44:39 2023
    XPost: comp.lang.lisp

    On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
    wrote:

    Recursion is silly (if you can avoid it).

    Not really.

    Recursion is elegant — but actually isn't any
    more efficient than iteration, and of course
    it requires stack space.

    Not necessarily. Scheme compilers actually are *required* to turn
    self recursion into a loop (except in the case of a supported "debug"
    mode).

    I think some of the Lisps can do it also - at least with
    (optimize (debug 0) (space 3))
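
For instance, a self-call in tail position needs no stack at all; it
compiles to a jump. A trivial sketch (any R7RS Scheme; countdown is
just an illustrative name):

(define (countdown n)
  (if (zero? n)
      'done
      (countdown (- n 1))))  ; tail call: the frame is reused, as in a loop

(countdown 10000000)         ; => done, in constant stack space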


And yes, most of the time (or always?) there
does exist an iterative counterpart solution.

    Always. Given enough resources (which could be code or memory space),
    any recursion can be turned into an equivalent loop, and vice versa.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to mhx on Wed Dec 27 14:51:24 2023
    XPost: comp.lang.lisp

    mhx@iae.nl (mhx) writes:
    I guess you are computing something different?

    Aha, one difference I notice is that Albert's code doesn't have a 100
    cent coin. I included that because we have them in the US, though they
    aren't used much except for in parking meters. They were introduced
    partly for use in vending machines, but those now tend to have dollar
    bill slots and sometimes also take credit cards.

    Let me try again:

|---------+--------------------+-------+--------|
|     amt | result             |   sec |     KB |
|---------+--------------------+-------+--------|
|     100 |                292 |  0.13 |  33644 |
|     200 |               2435 |  0.00 |   9412 |
|    1000 |             801451 |  0.01 |   9852 |
|   10000 |         6794128501 |  0.05 |  14316 |
|  100000 |     66793412685001 |  1.11 |  63904 |
|  500000 |  41682501983425001 |  6.07 | 240260 |
| 1000000 | 666793341266850001 | 10.67 | 475044 |
|---------+--------------------+-------+--------|

    So we're getting the same results for 1000000 now, but my version is
    still a lot slower. Maybe the memoization is not working right. Hmm.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to George Neuner on Wed Dec 27 23:12:07 2023
    XPost: comp.lang.lisp

    On 2023-12-27, George Neuner <gneuner2@comcast.net> wrote:
    On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
    wrote:

    Recursion is silly (if you can avoid it).

    Not really.

    Recursion is elegant — but actually isn't any
    more efficient than iteration, and of course
    it requires stack space.

    Not necessarily. Scheme compilers actually are *required* to turn
    self recursion into a loop (except in the case of a supported "debug"
    mode).

    Careful there: not all self-recursion is tail recursion!

For instance, the obvious way to write a recursive factorial fails
to be tail-recursive, but can be rewritten to tail form with
accumulator passing.
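
Concretely (standard Scheme; fact and fact-acc are illustrative names):

(define (fact n)           ; not tail-recursive: the * waits for the call
  (if (= n 1)
      1
      (* n (fact (- n 1)))))

(define (fact-acc n acc)   ; tail form: the self-call is the whole result
  (if (= n 1)
      acc
      (fact-acc (- n 1) (* acc n))))

(fact-acc 5 1)             ; => 120, in constant stack space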

    (I know you know all this, likely better than me, but `tis the
    season to have your egg nogged.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to mhx on Wed Dec 27 14:37:58 2023
    XPost: comp.lang.lisp

    mhx@iae.nl (mhx) writes:
    I guess you are computing something different?
    The time for 1,000,000 is also quite large, 15.24s vs 10ms (with iForth).

    Interesting, what values do you get for 100, 200, etc.? It could be
    that my method is wrong. I just did it off the top of my head and may
    have done something dumb.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mhx@21:1/5 to All on Wed Dec 27 22:29:43 2023
    XPost: comp.lang.lisp

    1000000 | 1333983445341383545001 | 15.24 | 580696

    That is quite different from the result none showed.
    | time coinsusa 1,000,000
    | 666793341266850001
    | real 0m0.353s

    I guess you are computing something different?
    The time for 1,000,000 is also quite large, 15.24s vs 10ms (with iForth).

    -marcel

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to gneuner2@comcast.net on Thu Dec 28 00:37:16 2023
    XPost: comp.lang.lisp

    In article <pu8poi13a1m0lq1am4a2c4fpop5nro1emh@4ax.com>,
    George Neuner <gneuner2@comcast.net> wrote:
    On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
    wrote:

    Recursion is silly (if you can avoid it).

    Not really.

    Recursion is elegant — but actually isn't any
    more efficient than iteration, and of course
    it requires stack space.

    Not necessarily. Scheme compilers actually are *required* to turn
    self recursion into a loop (except in the case of a supported "debug"
    mode).

    I think some of the Lisps can do it also - at least with
    (optimize (debug 0) (space 3))


And yes, most of the time (or always?) there
does exist an iterative counterpart solution.

    Always. Given enough resources (which could be code or memory space),
    any recursion can be turned into an equivalent loop, and vice versa.

In Knuth's TAOCP you cannot find recursive algorithms. They are
intended to run fast, and the algorithms are specified with explicit
stacks. Of course the sizes of those stacks are specified.

    Groetjes Albert

    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to mhx on Thu Dec 28 01:07:10 2023
    XPost: comp.lang.lisp

    In article <feea51c7336d6888f6d7b73e76cacebd@news.novabbs.com>,
    mhx <mhx@iae.nl> wrote:
    1000000 | 1333983445341383545001 | 15.24 | 580696

    That is quite different from the result none showed.
    | time coinsusa 1,000,000
    | 666793341266850001
    | real 0m0.353s

    I guess you are computing something different?
    The time for 1,000,000 is also quite large, 15.24s vs 10ms (with iForth).

Rubin is apparently knowledgeable about USA coins.
He corrected the coin table to contain 1 dollar coins. (What do I know.)
That spoils the benchmark effect.
Can you run the program again with 100 cent coins?
Maybe with 100,000, because now a million overflows 64 bits.



    -marcel
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to Anton Ertl on Thu Dec 28 00:31:55 2023
    In article <2023Dec27.181846@mips.complang.tuwien.ac.at>,
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    albert@cherry.(none) (albert) writes:
A more sane program fills in a table where each entry is derived
from previous entries. A possibility to change 100 cents is e.g.
adding 10 cents to a possibility for 90 cents.
This is a kind of reversed dynamic programming.

    What is reversed about this?

This problem is not the clearest example for explaining this.
There are actually 5 levels of recursion in this problem,
resulting in 5 iterations of `add.
It is kind of reversed: for each (amount, coin level) pair a
memoized result is required. But these are normally requested from the
upper level, and there is a chance that particular pairs are not needed.
That at least is the hope of dynamic programming.
It all starts with (100,5).

Using iteration it is clearer what we are doing.
In the first place it is obvious that we need all amounts.
A lisper proposes a memoization table of amount * level entries;
that probably works.

My strategy is to build the table for all amounts for level 1.
That is the first execution of `add. So the table is
filled in starting with amount 1 and only cents.
Compared to starting from (100,5), starting from (0,1) is reversed.
For the second and later iterations it is possible
to continue using the same table.
Compared to memoization we also save the lookup of the keys.
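
In a lisp the same strategy might look like this (a sketch I have not
benchmarked; Guile assumed; one vector is reused by all the `add
passes, and bignums sidestep the 64 bit limit):

(define (count-change limit coins)
  (let ((table (make-vector (+ limit 1) 0)))
    (vector-set! table 0 1)           ; one possibility to change 0 cent
    (for-each
     (lambda (coin)                   ; one pass of `add per denomination
       (do ((i coin (+ i 1)))
           ((> i limit))
         (vector-set! table i (+ (vector-ref table i)
                                 (vector-ref table (- i coin))))))
     coins)
    (vector-ref table limit)))

(display (count-change 100 '(1 5 10 25 50)))  ; => 292
(newline)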

The table for change of 1,000,000 (10,000 dollar) is
8 megabyte, not too bad. In Europe we have the 1 2 5 10 20 50 100 200
denominations. Multiplying the size of the table by 7
begins to become unfavourable, not to speak of the overhead that the
general Lisp mechanism probably requires.

I will not ask for a calculation for more than 1,000,000
cents, so that the result fits in a single precision
64 bit number.
CHALLENGE
Can anyone reproduce the count 666793341266850001 for
1 million dollarcent in Lisp?

(Tools used: a simple Forth and some 20 lines of Forth code.
It runs in under a second. I wonder what iForth does.)


    - Anton

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to no.email@nospam.invalid on Thu Dec 28 00:59:52 2023
    XPost: comp.lang.lisp

    In article <8734vn9uit.fsf@nightsong.com>,
    Paul Rubin <no.email@nospam.invalid> wrote:
    Paul Rubin <no.email@nospam.invalid> writes:
    I haven't tried this so I may be missing something, but the above
    follows a common pattern.

    I still haven't worked out the details of the force/delay approach, but
    here (at bottom) is a Scheme version that implements memoization using a
    hash table by brute force. 100 cents and 200 cents take no perceptible
    time.

I ran it with Guile 3.0 (compilation enabled) and got these results on a
small Hetzner VM (number of cents to be changed, result, cpu time in
    seconds, and ram consumption in KB):

|---------+------------------------+-------+--------|
|     amt | result                 |   sec |     KB |
|---------+------------------------+-------+--------|
|     100 |                    293 |  0.01 |   9264 |
|     200 |                   2728 |  0.01 |   9340 |
|    1000 |                2103596 |  0.01 |  10080 |
|   10000 |           139946140451 |  0.13 |  15176 |
|  100000 |      13398445413854501 |  1.25 |  67564 |
|  500000 |   41707305668679272501 |  5.76 | 293964 |
| 1000000 | 1333983445341383545001 | 15.24 | 580696 |
|---------+------------------------+-------+--------|

    ================================================================

    <SNIP>

    Impressed. My answer for 1000000 is actually wrong
    because I didn't know that there are 1 dollar coins.

    Improving my program:

    ~/euler: time coinsusa 100000
    13398445413854501

    real 0m0.095s
    user 0m0.079s
    sys 0m0.001s

    Memory footprint estimated 900 Kbyte.

I see that you go beyond 64 bit numbers and I cannot
calculate the million dollarcent challenge.
The timing and footprint are slightly lower, as a reward
for more analysis.
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to no.email@nospam.invalid on Thu Dec 28 01:26:22 2023
    XPost: comp.lang.lisp

    In article <87tto38djn.fsf@nightsong.com>,
    Paul Rubin <no.email@nospam.invalid> wrote:
    mhx@iae.nl (mhx) writes:
    I guess you are computing something different?

    Aha, one difference I notice is that Albert's code doesn't have a 100
cent coin. I included that because we have them in the US, though they
aren't used much except for in parking meters. They were introduced
    partly for use in vending machines, but those now tend to have dollar
    bill slots and sometimes also take credit cards.

    Let me try again:

|---------+--------------------+-------+--------|
|     amt | result             |   sec |     KB |
|---------+--------------------+-------+--------|
|     100 |                292 |  0.13 |  33644 |
|     200 |               2435 |  0.00 |   9412 |
|    1000 |             801451 |  0.01 |   9852 |
|   10000 |         6794128501 |  0.05 |  14316 |
|  100000 |     66793412685001 |  1.11 |  63904 |
|  500000 |  41682501983425001 |  6.07 | 240260 |
| 1000000 | 666793341266850001 | 10.67 | 475044 |
|---------+--------------------+-------+--------|

    So we're getting the same results for 1000000 now, but my version is
    still a lot slower. Maybe the memoization is not working right. Hmm.

You must be kidding. The memoization using standard tools impressed me.
Compared to the result below it is only one order of magnitude slower.

    In the meantime I have made a double precision version:

    ~/euler: time coinsusad 1000000
    1333983445341383545001

    real 0m1.209s
    user 0m1.038s
    sys 0m0.021s

A double precision version takes double the time:
900 ms for the original challenge.
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to albert@cherry. on Wed Dec 27 23:31:30 2023
    XPost: comp.lang.lisp

    albert@cherry.(none) (albert) writes:
You must be kidding. The memoization using standard tools impressed me.
Compared to the result below it is only one order of magnitude slower.

    I see, so the algorithms are basically similar. I haven't examined your
    code closely yet. But I see you are using a much more efficient memo
    table than I am. Yours is indexed on the offset into the coin list,
    while mine uses the entire list. I might try adapting mine, maybe to
    use arrays instead of a hash table.

    I noticed I got an 8x speedup and 5x memory reduction by simply
    reversing the order of the coin list, i.e. using (100 50 25 10 5 1)
    instead of (1 5 10 25 50 100).

    In reality, 50 cent coins and dollar coins are rarely used in the US.
    50 cent coins are large enough to be cumbersome, and dollar coins never
    became popular because they are too easily confused with quarters (they
    are just slightly larger).

Canada has all the coins that the US has, plus a 2 dollar coin, and they
actually use them. France before the Euro had even more, I believe.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to no.email@nospam.invalid on Thu Dec 28 11:20:36 2023
    XPost: comp.lang.lisp

    In article <87plyq9419.fsf@nightsong.com>,
    Paul Rubin <no.email@nospam.invalid> wrote:
    albert@cherry.(none) (albert) writes:
    You must be kidding. The memoization using standard tools impressed me.
Compared to the result below it is only one order of magnitude slower.

    I see, so the algorithms are basically similar. I haven't examined your
    code closely yet. But I see you are using a much more efficient memo
    table than I am. Yours is indexed on the offset into the coin list,
    while mine uses the entire list. I might try adapting mine, maybe to
    use arrays instead of a hash table.

It is similar in the sense that in the end the intermediate results
are calculated as the same partial sums. The great difference is
that it is reversed. Memoization asks for an intermediate result, e.g.
(100,5), to be calculated. I start from the bottom and just guess that
all intermediate results are needed. I start with calculating
(1,1) (2,1) (3,1) ...
This depends on analysing the problem.
You will get nowhere with this method if you apply it to Project Euler
problem 502 (counting castles).


    I noticed I got an 8x speedup and 5x memory reduction by simply
    reversing the order of the coin list, i.e. using (100 50 25 10 5 1)
    instead of (1 5 10 25 50 100).

    In reality, 50 cent coins and dollar coins are rarely used in the US.
50 cent coins are large enough to be cumbersome, and dollar coins never
became popular because they are too easily confused with quarters (they
    are just slightly larger).

Canada has all the coins that the US has, plus a 2 dollar coin, and they
actually use them. France before the Euro had even more, I believe.

France used denominations from 10 centimes up to 10 francs.
The 10 franc piece, the 2 euro piece and the Canadian 2 dollar piece
have comparable purchasing power. At the time the Netherlands
had a 2.5 guilder coin, also comparable.
Formally the Euro has a regular 1 2 5 10 20 50 100 200 series,
but the 1 and 2 cent pieces are no longer used in the Netherlands.
So 5 or 6 coins are in actual use.

    Groetjes Albert

    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mhx@21:1/5 to All on Thu Dec 28 11:13:40 2023
    XPost: comp.lang.lisp

    From the FysForth days. It will run out of stack before it
runs out of memory, but it clocks less than 1 ms when doing
    900000 count_change .

    Don't ask me how it works :--)

    -marcel


    DOC
    (* http://www.webcom.com/nazgul/change.html#gcc

    For the curious, here are the results computed for various amounts, using coins in
    denominations 1, 5, 10, 25 and 50. The ``answer'' column shows the number of ways
    found to make change for the given amount, the ``leaves'' column shows the number
    of leaf nodes in the tree recursion, and the ``calls'' column shows the total number
    of times the recursive procedure was called.

    (amount=)
    n   answer     leaves      calls
---------------------------------------------
   50       50        786       1571
  100      292       7750      15499
  150      972      35888      71775
  200     2435     114795     229589
  250     5126     293666     587331
  300     9590     646296    1292591
  350    16472    1276080    2552159
  400    26517    2321013    4642025
  450    40570    3958690    7917379
  500    59576    6411306   12822611
  550    84580    9950656   19901311
  600   116727   14903135   29806269
  650   157262   21654738   43309475
  700   207530   30656060   61312119
  750   268976   42427296   84854591
  800   343145   57563241  115126481
  850   431682   76738290  153476579
  900   536332  100711438  201422875
  950   658940  130331280  260662559

    All timings (argument = 200) are in seconds on a 75 MHz Pentium running Linux 1.2.13
    with libc 5.0.9, except that CMUCL needed Linux 2.0.25 and libc 5.2.18, and MSW Logo
    was run under Windows 95.

gcc          Gnu C                            0.05
p2c          P2C Pascal Translator            0.05
a60          Algol 60 to C Translator         0.08
cmucl        CMU Common Lisp                  0.09
gcl          Gnu Common Lisp                  0.09
scheme       MIT Scheme                       0.15
swn          MIT Scheme without Numerics      1.17
scheme48     Scheme 48                        3.65
p4           P4 Pascal P-code Interpreter     7.31
postscript   Ghostscript                      8.52
emacs        Emacs Lisp                      12.27
awk          Gnu Awk                         15.34
orth         Orthogonal                      32.48
tex          TeX                             46.49
a60          Algol 60 Interpreter            69.69
intercal     INTERCAL                        75.60
ucblogo      UCB Logo                       214.00
mswlogo      MSW Logo                       221.00
logopascal   Pascal in Logo                1049.00
walk         Lisp in Awk                  43000.00

    *)
    ENDDOC


    ANEW -count_change

    #1500000 =: MAXSIZE

    CREATE _cc1 1 , HERE MAXSIZE CELLS ALLOT MAXSIZE CELLS ERASE
    CREATE _cc2 2 , HERE MAXSIZE CELLS ALLOT MAXSIZE CELLS ERASE
    CREATE _cc3 3 , HERE MAXSIZE CELLS ALLOT MAXSIZE CELLS ERASE
    CREATE _cc4 4 , HERE MAXSIZE CELLS ALLOT MAXSIZE CELLS ERASE
    CREATE _cc5 5 , HERE MAXSIZE CELLS ALLOT MAXSIZE CELLS ERASE

    CREATE _ccx 0 , _cc1 , _cc2 , _cc3 , _cc4 , _cc5 ,

    : 'cc _ccx []CELL @ []CELL ; ( amount kinds_of_coins -- addr )

    CREATE KOC 0 , 1 , 5 , #10 , #25 , #50 ,

    : first_denomination KOC []CELL @ ; ( kinds_of_coins -- n )

\ The order of recursive calls is important!
\ Stack overflow will follow if they are interchanged.
: cc ( amount kinds_of_coins -- n )
  OVER 0= IF 2DROP 1 EXIT ENDIF
  OVER 0< IF 2DROP 0 EXIT ENDIF
  DUP  0= IF 2DROP 0 EXIT ENDIF
  2DUP 'cc DUP @ ?DUP IF >R 3DROP R> EXIT ENDIF
  >R                 \ keep the table slot on the return stack
  2DUP DUP >R first_denomination - R> RECURSE >R
  1- RECURSE
  R> +
  DUP R> ! ;         \ memoize the result in the slot

: count_change ( amount -- u )
  DUP MAXSIZE >= ABORT" out of range"
  CR TIMER-RESET
  5 cc . .ELAPSED ;

: count_changes ( -- )
  #1550 #50 DO
    CR I 5 .R TIMER-RESET
    I 5 cc 9 .R 2 SPACES .ELAPSED
  #50 +LOOP ;

    CR .( Try: count_changes)

    DOC
    (*
    P54C-166 iForth 1.11e
    FORTH> count_changes
    50 50 0.000 seconds elapsed.
    100 292 0.000 seconds elapsed.
    150 972 0.001 seconds elapsed.
    200 2435 0.000 seconds elapsed.
    250 5126 0.000 seconds elapsed.
    300 9590 0.001 seconds elapsed.
    350 16472 0.000 seconds elapsed.
    400 26517 0.000 seconds elapsed.
    450 40570 0.001 seconds elapsed.
    500 59576 0.000 seconds elapsed.
    550 84580 0.000 seconds elapsed.
    600 116727 0.001 seconds elapsed.
    650 157262 0.000 seconds elapsed.
    700 207530 0.000 seconds elapsed.
    750 268976 0.001 seconds elapsed.
    800 343145 0.001 seconds elapsed.
    850 431682 0.000 seconds elapsed.
    900 536332 0.001 seconds elapsed.
    950 658940 0.000 seconds elapsed.
    1000 801451 0.000 seconds elapsed.
    1050 965910 0.001 seconds elapsed.
    1100 1154462 0.001 seconds elapsed.
    1150 1369352 0.000 seconds elapsed.
    1200 1612925 0.001 seconds elapsed.
    1250 1887626 0.001 seconds elapsed.
    1300 2196000 0.000 seconds elapsed.
    1350 2540692 0.000 seconds elapsed.
    1400 2924447 0.001 seconds elapsed.
    1450 3350110 0.001 seconds elapsed.
    1500 3820626 0.000 seconds elapsed. ok
    *)
    ENDDOC

    EOF

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to mhx on Thu Dec 28 16:17:27 2023
    XPost: comp.lang.lisp

    In article <8af29958c24b1596a65d3d7f927f6211@news.novabbs.com>,
    mhx <mhx@iae.nl> wrote:
    From the FysForth days. It will run out of stack before it
runs out of memory, but it clocks less than 1 ms when doing
    900000 count_change .

    Don't ask me how it works :--)

It is a straightforward transcription of the Scheme code.


    -marcel


    DOC

    <SNIP>

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to no.email@nospam.invalid on Thu Dec 28 17:54:02 2023
    XPost: comp.lang.lisp

    In article <87plyq9419.fsf@nightsong.com>,
    Paul Rubin <no.email@nospam.invalid> wrote:
    albert@cherry.(none) (albert) writes:
    You must be kidding. The memoization using standard tools impressed me.
Compared to the result below it is only one order of magnitude slower.

    I see, so the algorithms are basically similar. I haven't examined your
    code closely yet. But I see you are using a much more efficient memo
    table than I am. Yours is indexed on the offset into the coin list,
    while mine uses the entire list. I might try adapting mine, maybe to
    use arrays instead of a hash table.

    I noticed I got an 8x speedup and 5x memory reduction by simply
    reversing the order of the coin list, i.e. using (100 50 25 10 5 1)
    instead of (1 5 10 25 50 100).

I get a 2x speedup by reversing the order of the coins.
Reversing doesn't pay off until you go through the table in strides
greater than one. In the first step you have to fill in the
multiples of 50 cents with 1.
Introducing a stride cost me a 40% slowdown.
Then there is a subtlety about the stride. You can't use a stride
of 10 by stepping 10, because you would miss correcting 35 by the
amount of 25 from the previous step. So the correct stride is the gcd
of the previous denominations.

    The difference is minimal:

4c4,6
< CREATE kind-of-coins 1 , 5 , 10 , 25 , 50 , 0 ,
---
> CREATE kind-of-coins 50 , 25 , 10 , 5 , 1 , 0 ,
17c22,24
< : add limit 1+ OVER DO I OVER - aap[] 2@ I aap[] +2! LOOP DROP ;
---
> : add DUP stride GCD TO stride
>   limit 1+ OVER DO I OVER - aap[] 2@ I aap[] +2! stride +LOOP DROP ;

    <SNIP>
    [Leaving out the initialisation and declaration of `stride.]


    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to 433-929-6894@kylheku.com on Thu Dec 28 14:45:29 2023
    XPost: comp.lang.lisp

    On Wed, 27 Dec 2023 23:12:07 -0000 (UTC), Kaz Kylheku <433-929-6894@kylheku.com> wrote:

    On 2023-12-27, George Neuner <gneuner2@comcast.net> wrote:
    On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
    wrote:

    Recursion is silly (if you can avoid it).

    Not really.

    Recursion is elegant — but actually isn't any
    more efficient than iteration, and of course
    it requires stack space.

    Not necessarily. Scheme compilers actually are *required* to turn
    self recursion into a loop (except in the case of a supported "debug"
    mode).

    Careful there: not all self-recursion is tail recursion!

    Oops, you're right. I should have said Scheme is required to turn *tail-recursion* into a loop.

    However, it is true that any recursion can be turned into a loop, and
    the Gambit compiler can (with some restrictions) turn self recursions,
    and even some mutual recursions, into loops.


    For instance, the obvious way to write a recursive factorial fails
to be tail-recursive, but can be rewritten to tail form with
    accumulator passing.

    (I know you know all this, likely better than me, but `tis the
    season to have your egg nogged.)

    I need a good egg-nogging once in a while 8-)
    Happy Holidays!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to dxf on Thu Dec 28 15:05:02 2023
    XPost: comp.lang.lisp

    On Thu, 28 Dec 2023 12:24:05 +1100, dxf <dxforth@gmail.com> wrote:

    On 28/12/2023 9:44 am, George Neuner wrote:
    On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
    wrote:

    Recursion is silly (if you can avoid it).

    Not really.

    Recursion is elegant — but actually isn't any
    more efficient than iteration, and of course
    it requires stack space.

    Not necessarily. Scheme compilers actually are *required* to turn
    self recursion into a loop (except in the case of a supported "debug"
    mode).

    I think some of the Lisps can do it also - at least with
    (optimize (debug 0) (space 3))

That's kind of offensive (to the programmer) which - not unlike ChatGPT -
puts humans in a diminished role. So the question becomes will humans have
any role at all? Much has been said about 'general purpose subroutines'.
ISTM the only logical reason to have them and (by implication) standards
is that they benefit automation.

    I apologize if I'm being dense, but I don't understand how it can be "offensive" for a compiler to optimize code.

    Code is not written only for the compiler - it is also intended for
    other programmers: e.g., the ones who will maintain your code in the
    future ... a set which includes you 6 months from now when you've
    forgotten how the code works.

A recursion can often be easier to understand than its equivalent loop
- which may involve many changing variables, and require auxiliary data
structures (such as a deliberate stack) to work. Even more so in
cases of multiple nested loops.

    Of course, YMMV.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to George Neuner on Thu Dec 28 22:11:42 2023
    XPost: comp.lang.lisp

    On 2023-12-28, George Neuner <gneuner2@comcast.net> wrote:
    On Wed, 27 Dec 2023 23:12:07 -0000 (UTC), Kaz Kylheku
    <433-929-6894@kylheku.com> wrote:

    On 2023-12-27, George Neuner <gneuner2@comcast.net> wrote:
    On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
    wrote:

    Recursion is silly (if you can avoid it).

    Not really.

    Recursion is elegant — but actually isn't any
    more efficient than iteration, and of course
    it requires stack space.

    Not necessarily. Scheme compilers actually are *required* to turn
    self recursion into a loop (except in the case of a supported "debug"
    mode).

    Careful there: not all self-recursion is tail recursion!

    Oops, you're right. I should have said Scheme is required to turn *tail-recursion* into a loop.

    However, it is true that any recursion can be turned into a loop, and

    Can it, by an algorithm? That's a theoretical question that's above my pay grade.

    (Or, well, below; if I were a CS academic, I'd make less.)

It doesn't seem obvious to me. Consider a graph traversal (like garbage
collection marking). When you visit N pointers of a graph node, only the
    last visit can be a tail call. The first N-1 have to return to that
    node.

    There is a mildly famous (in some circles) pointer-reversal trick that
    has been used to implement a stackless traversal for garbage collection.
    But it's not a simple code transformation; it mutates the nodes in order
    to, effectively, create a return stack within the graph itself.

    That could not possibly be a valid automatic compiler optimization.
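
To make that concrete, here is a sketch of the trick for two-pointer
nodes (R7RS define-record-type; the node type, field names and the
mark/flag bits are mine for illustration, not from any real collector).
The "return stack" is threaded through the reversed left/right fields:

(define-record-type <node>
  (make-node left right marked flag)
  node?
  (left   node-left   set-left!)
  (right  node-right  set-right!)
  (marked marked?     set-marked!)
  (flag   flag?       set-flag!))

(define (new-node l r) (make-node l r #f #f))

(define (mark! root)
  (let loop ((p root) (q #f))              ; q = reversed parent chain
    (cond ((and (node? p) (not (marked? p)))
           (set-marked! p #t)              ; advance: left ptr points up
           (let ((t (node-left p)))
             (set-left! p q)
             (loop t p)))
          ((not q) 'done)                  ; climbed back past the root
          ((not (flag? q))                 ; left side done: swing right
           (set-flag! q #t)
           (let ((parent (node-left q))
                 (r (node-right q)))
             (set-left! q p)               ; restore left child
             (set-right! q parent)         ; parent now hides in right ptr
             (loop r q)))
          (else                            ; right side done: pop upward
           (let ((parent (node-right q)))
             (set-right! q p)              ; restore right child
             (loop q parent))))))

(define shared (new-node #f #f))
(define root (new-node shared shared))     ; sharing is fine: the mark bit
(mark! root)                               ; stops revisits
(marked? shared)                           ; => #t, all pointers restored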

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to Kaz Kylheku on Thu Dec 28 15:06:55 2023
    XPost: comp.lang.lisp

    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    However, it is true that any recursion can be turned into a loop, and

    Can it, by an algorithm? That's a theoretical question that's above my
    pay grade.

Yes of course, every compiler has to turn recursion into a loop, since
assembly language doesn't have recursion. You push the recursive args
    onto a stack, and then the loop pops args from the stack and processes
    them. Example (quicksort):

    push initial arg onto stack
    while (stack not empty):
    pop arg from stack
    create two partitions p1 and p2
    push both onto stack
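
Fleshed out in Scheme, the same shape might look like this (a sketch,
not tuned: vector quicksort with a Lomuto partition, the pending ranges
kept on an explicit list instead of the call stack):

(define (quicksort! v)
  (define (swap! i j)
    (let ((t (vector-ref v i)))
      (vector-set! v i (vector-ref v j))
      (vector-set! v j t)))
  (define (partition lo hi)            ; pivot = v[hi]; returns its slot
    (let ((pivot (vector-ref v hi)))
      (let scan ((i lo) (j lo))
        (cond ((= j hi) (swap! i hi) i)
              ((< (vector-ref v j) pivot)
               (swap! i j)
               (scan (+ i 1) (+ j 1)))
              (else (scan i (+ j 1)))))))
  (let loop ((stack (list (cons 0 (- (vector-length v) 1)))))
    (unless (null? stack)
      (let ((lo (caar stack))
            (hi (cdar stack))
            (rest (cdr stack)))
        (if (< lo hi)
            (let ((p (partition lo hi)))  ; push the two partitions
              (loop (cons (cons lo (- p 1))
                          (cons (cons (+ p 1) hi) rest))))
            (loop rest)))))               ; pop trivial ranges
  v)

(quicksort! (vector 3 1 4 1 5 9 2 6))     ; => #(1 1 2 3 4 5 6 9)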

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julieta Shem@21:1/5 to Paul Rubin on Thu Dec 28 22:54:56 2023
    XPost: comp.lang.lisp

    Paul Rubin <no.email@nospam.invalid> writes:

    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    However, it is true that any recursion can be turned into a loop, and

    Can it, by an algorithm? That's a theoretical question that's above my
    pay grade.

Yes of course, every compiler has to turn recursion into a loop, since
assembly language doesn't have recursion. You push the recursive args
    onto a stack, and then the loop pops args from the stack and processes
    them. Example (quicksort):

    push initial arg onto stack
    while (stack not empty):
    pop arg from stack
    create two partitions p1 and p2
    push both onto stack

    I think what he had in mind is an iteration ``without a stack''. It's
    not clear what that would mean. If I translate a stack automaton to a
    finite state machine, am I using a stack? If I translate a recursive
    procedure into jump instructions and stack operations, am I not using
    recursion any longer? If I translate an in-place quicksort to a
    functional version, do I have a /new/ algorithm? (When are two
    expressions of an algorithm the same? When are they different?)

    It seems Stephen Cook has proved that whatever a stack automaton can do
    with complexity f(n) a finite automaton can do with O(n lg n). Knuth
    tells the story in

    https://www.youtube.com/watch?v=EE1R8FYUJm0&t=5280s.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to Julieta Shem on Thu Dec 28 18:39:34 2023
    XPost: comp.lang.lisp

    Julieta Shem <jshem@yaxenu.org> writes:
    I think what he had in mind is an iteration ``without a stack''. It's
    not clear what that would mean. If I translate a stack automaton to a
    finite state machine, am I using a stack?

You can't in general turn it into a pure FSM. You need some kind of
auxiliary storage. That is necessary by the time hierarchy theorem,
the classification of languages by the Chomsky hierarchy, etc. For
example, consider a string of parentheses like (()((((()())))))))()())

    Are the parentheses balanced? You can tell by counting them, but a pure
    FSM can't do that.
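
A counter is all it takes -- but a counter is unbounded state, which is
exactly what a finite automaton lacks. A sketch (any Scheme; balanced?
is an illustrative name):

(define (balanced? s)
  ;; one integer of extra state; the depth has no a-priori bound
  (let loop ((cs (string->list s)) (depth 0))
    (cond ((negative? depth) #f)          ; closed more than opened
          ((null? cs) (zero? depth))
          ((char=? (car cs) #\() (loop (cdr cs) (+ depth 1)))
          ((char=? (car cs) #\)) (loop (cdr cs) (- depth 1)))
          (else (loop (cdr cs) depth)))))

(balanced? "(()())")   ; => #t
(balanced? "(()")      ; => #f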

    It seems Stephen Cook has proved that whatever a stack automaton can do
    with complexity f(n) a finite automaton can do with O(n lg n). Knuth
    tells the story in

    That must be a misunderstanding, but I'll watch the video when I get a
    chance. It's always worth listening to Knuth.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julieta Shem@21:1/5 to Paul Rubin on Fri Dec 29 00:13:13 2023
    XPost: comp.lang.lisp

    Paul Rubin <no.email@nospam.invalid> writes:

    Julieta Shem <jshem@yaxenu.org> writes:
    I think what he had in mind is an iteration ``without a stack''. It's
    not clear what that would mean. If I translate a stack automaton to a
    finite state machine, am I using a stack?

You can't in general turn it into a pure FSM. You need some kind of
auxiliary storage. That is necessary by the time hierarchy theorem,
the classification of languages by the Chomsky hierarchy, etc. For
example, consider a string of parentheses like (()((((()())))))))()())

    Are the parentheses balanced? You can tell by counting them, but a pure
    FSM can't do that.

A pure FSM cannot do that if you allow such strings to grow to
arbitrary sizes. The length of this string cannot grow to arbitrary
sizes on /your/ computer, however, because your memory is finite.
Therefore, your computer is actually a finite state machine. So such a
finite state machine can check whether those parentheses are balanced up
to the longest such string it can manage to consider.

    It seems Stephen Cook has proved that whatever a stack automaton can do
    with complexity f(n) a finite automaton can do with O(n lg n). Knuth
    tells the story in

That must be a misunderstanding, but I'll watch the video when I get a
chance. It's always worth listening to Knuth.

    Indeed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to Julieta Shem on Thu Dec 28 22:45:02 2023
    XPost: comp.lang.lisp

    Julieta Shem <jshem@yaxenu.org> writes:
    Therefore, your computer is actually a finite state machine.

    Usually these theory discussions are about abstract computers without
    such limitations. As one of my professors liked to say in his intro
    lecture about Turing machines, "we can mine the Andromeda galaxy for
    more tape".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to mhx on Fri Dec 29 12:32:40 2023
    XPost: comp.lang.lisp

    In article <8af29958c24b1596a65d3d7f927f6211@news.novabbs.com>,
    mhx <mhx@iae.nl> wrote:
    From the FysForth days. It will run out of stack before it
runs out of memory, but it clocks less than 1 ms when doing
    900000 count_change .

    Don't ask me how it works :--)

I said that it is a straightforward transcription of the Scheme code.
That is not true. The huge tables remember results for using 1..5 kinds
of coins. This is some clever memoization.


    -marcel


    DOC
    (*
    P54C-166 iForth 1.11e
    FORTH> count_changes
    50 50 0.000 seconds elapsed.
    100 292 0.000 seconds elapsed.
    150 972 0.001 seconds elapsed.
    200 2435 0.000 seconds elapsed.
    250 5126 0.000 seconds elapsed.
    300 9590 0.001 seconds elapsed.
    350 16472 0.000 seconds elapsed.
    400 26517 0.000 seconds elapsed.
    450 40570 0.001 seconds elapsed.
    500 59576 0.000 seconds elapsed.
    550 84580 0.000 seconds elapsed.
    600 116727 0.001 seconds elapsed.
    650 157262 0.000 seconds elapsed.
    700 207530 0.000 seconds elapsed.
    750 268976 0.001 seconds elapsed.
    800 343145 0.001 seconds elapsed.
    850 431682 0.000 seconds elapsed.
    900 536332 0.001 seconds elapsed.
    950 658940 0.000 seconds elapsed.
    1000 801451 0.000 seconds elapsed.
    1050 965910 0.001 seconds elapsed.
    1100 1154462 0.001 seconds elapsed.
    1150 1369352 0.000 seconds elapsed.
    1200 1612925 0.001 seconds elapsed.
    1250 1887626 0.001 seconds elapsed.
    1300 2196000 0.000 seconds elapsed.
    1350 2540692 0.000 seconds elapsed.
    1400 2924447 0.001 seconds elapsed.
    1450 3350110 0.001 seconds elapsed.
    1500 3820626 0.000 seconds elapsed. ok
    *)
    ENDDOC

This table is misleading. Each step adds entries to an
already filled table. I stepped up by 10,000 until 1,500,000,
with 2 .. 4 ms printed for each step.
The proper conclusion is that the cost is linear in the
argument, at less than 4 ms per 10,000 (0.4 sec for 1M).

    I run it in ciforth after including a preamble
    / ----------------------
    WANT DOC $-PREFIX ALIAS ABORT" MARK-TIME MARKER >=
    'CONSTANT ALIAS =:
    'MARKER ALIAS ANEW
    'MARK-TIME ALIAS TIMER-RESET
    : .ELAPSED ELAPSED .mS ;
    : []CELL SWAP CELLS + ;
    : ENDIF POSTPONE THEN ; IMMEDIATE
    : 3DROP DROP DROP DROP ;
    WANT CASE-SENSITIVE
    / ----------------------
Use a 64 bit Forth with a 64 Mbyte dictionary.
It runs in wina64.exe under Linux wine, but
I had to reassemble with a larger dictionary space.
(This takes 200 mS wall clock time.)

The huge tables result in a 60 Mbyte executable.
Tip for ciforth users: use REALLOT.

    Try: count_changes OK
    count_changes

    10000 6794128501 7.110mS
    20000 107683177001 4.961mS
    30000 543427145501 4.802mS
    40000 1714786034001 3.586mS
    50000 4182519842501 3.164mS
    60000 8667388571001 3.300mS
    70000 16050152219501 2.966mS
    80000 27371570788001 3.240mS
    90000 43832404276501 3.399mS
    100000 66793412685001 3.258mS
    110000 97775356013501 3.522mS
    120000 138458994262001 3.251mS
    130000 190685087430501 3.383mS
    140000 256454395519001 3.275mS
    150000 337927678527501 3.547mS
    160000 437425696456001 3.393mS
    170000 557429209304501 3.515mS
    180000 700578977073001 2.928mS
    190000 869675759761501 3.402mS
    200000 1067680317370001 3.489mS
    210000 1297713409898501 2.988mS
    220000 1563055797347001 3.269mS
    230000 1867148239715501 3.264mS
    240000 2213591497004001 2.942mS
    250000 2606146329212501 3.658mS
    260000 3048733496341001 3.333mS
    270000 3545433758389501 3.405mS
    280000 4100487875358001 3.046mS
    290000 4718296607246501 3.419mS
    300000 5403420714055001 3.442mS
    310000 6160580955783501 3.232mS
    320000 6994658092432001 3.378mS
    330000 7910692884000501 3.013mS
    340000 8913886090489001 3.173mS
    350000 10009598471897501 3.441mS
    360000 11203350788226001 3.043mS
    370000 12500823799474501 3.538mS
    380000 13907858265643001 3.176mS
    390000 15430454946731501 3.177mS
    400000 17074774602740001 3.409mS
    410000 18847137993668501 3.360mS
    420000 20754025879517001 3.276mS
    430000 22802079020285501 3.116mS
    440000 24998098175974001 3.062mS
    450000 27349044106582501 3.225mS
    460000 29862037572111001 2.869mS
    470000 32544359332559501 3.417mS
    480000 35403450147928001 2.976mS
    490000 38446910778216501 3.063mS
    500000 41682501983425001 3.426mS
    510000 45118144523553501 3.060mS
    520000 48761919158602001 3.357mS
    530000 52622066648570501 2.975mS
    540000 56706987753459001 2.982mS
    550000 61025243233267501 3.442mS
    560000 65585553847996001 3.095mS
    570000 70396800357644501 3.619mS
    580000 75468023522213001 3.113mS
    590000 80808424101701501 3.095mS
    600000 86427362856110001 3.280mS
    610000 92334360545438501 2.917mS
    620000 98539097929687001 3.500mS
    630000 105051415768855501 2.984mS
    640000 111881314822944001 3.030mS
    650000 119038955851952501 3.624mS
    660000 126534659615881001 3.004mS
    670000 134378906874729501 3.327mS
    680000 142582338388498001 3.461mS
    690000 151155754917186501 3.314mS
    700000 160110117220795001 3.054mS
    710000 169456546059323501 3.138mS
    720000 179206322192772001 3.514mS
    730000 189370886381140501 3.062mS
    740000 199961839384429001 3.163mS
    750000 210990941962637501 3.412mS
    760000 222470114875766001 3.327mS
    770000 234411438883814501 3.268mS
    780000 246827154746783001 3.343mS
    790000 259729663224671501 3.344mS
    800000 273131525077480001 3.158mS
    810000 287045461065208501 2.933mS
    820000 301484351947857001 3.578mS
    830000 316461238485425501 2.943mS
    840000 331989321437914001 3.340mS
    850000 348081961565322501 3.571mS
    860000 364752679627651001 3.053mS
    870000 382015156384899501 3.537mS
    880000 399883232597068001 3.435mS
    890000 418370909024156501 3.386mS
    900000 437492346426165001 3.266mS
    910000 457261865563093501 3.295mS
    920000 477693947194942001 3.000mS
    930000 498803232081710501 3.447mS
    940000 520604520983399001 3.706mS
    950000 543112774660007501 3.191mS
    960000 566343113871536001 3.323mS
    970000 590310819377984501 3.336mS
    980000 615031331939353001 3.004mS
    990000 640520252315641501 3.444mS
    1000000 666793341266850001 3.221mS
    1010000 693866519552978501 3.353mS
    1020000 721755867934027001 2.813mS
    1030000 750477627169995501 2.939mS
    1040000 780048198020884001 3.796mS
    1050000 810484141246692501 3.198mS
    1060000 841802177607421001 3.376mS
    1070000 874019187863069501 3.103mS
    1080000 907152212773638001 3.456mS
    1090000 941218453099126501 3.167mS
    1100000 976235269599535001 3.258mS
    1110000 1012220183034863501 3.320mS
    1120000 1049190874165112001 3.261mS
    1130000 1087165183750280501 3.217mS
    1140000 1126161112550369001 3.256mS
    1150000 1166196821325377501 2.917mS
    1160000 1207290630835306001 3.509mS
    1170000 1249461021840154501 3.351mS
    1180000 1292726635099923001 3.219mS
    1190000 1337106271374611501 2.919mS
    1200000 1382618891424220001 2.917mS
    1210000 1429283616008748501 2.943mS
    1220000 1477119725888197001 2.928mS
    1230000 1526146661822565501 3.048mS
    1240000 1576384024571854001 3.566mS
    1250000 1627851574896062501 3.266mS
    1260000 1680569233555191001 3.456mS
    1270000 1734557081309239501 3.245mS
    1280000 1789835358918208001 3.336mS
    1290000 1846424467142096501 2.879mS
    1300000 1904344966740905001 3.341mS
    1310000 1963617578474633501 3.309mS
    1320000 2024263183103282001 3.019mS
    1330000 2086302821386850501 3.138mS
    1340000 2149757694085339001 3.475mS
    1350000 2214649161958747501 2.894mS
    1360000 2280998745767076001 3.421mS
    1370000 2348828126270324501 3.101mS
    1380000 2418159144228493001 3.092mS
    1390000 2489013800401581501 3.124mS
    1400000 2561414255549590001 3.361mS
    1410000 2635382830432518501 3.475mS
    1420000 2710942005810367001 3.309mS
    1430000 2788114422443135501 3.315mS
    1440000 2866922881090824001 3.066mS
    1450000 2947390342513432501 2.976mS
    1460000 3029539927470961001 3.120mS
    1470000 3113394916723409501 3.210mS
    1480000 3198978751030778001 3.194mS
    1490000 3286315031153066501 2.954mS
    1500000 3375427517850275001 3.288mS OK

Rubin's program can also be run with an older version of Guile (2.0.13).
My guess is that Rubin made a fresh start each time
with the mysterious singleton? call.
Subsequent fetches from already cached results are not
that interesting.

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to 433-929-6894@kylheku.com on Fri Dec 29 10:39:30 2023
    XPost: comp.lang.lisp

    On Thu, 28 Dec 2023 22:11:42 -0000 (UTC), Kaz Kylheku <433-929-6894@kylheku.com> wrote:

    On 2023-12-28, George Neuner <gneuner2@comcast.net> wrote:

    However, it is true that any recursion can be turned into a loop, ...

    Can it, by an algorithm? That's a theoretical question that's above my pay grade.

    With caveats.

    There is an algorithm to turn general recursions into loops. However,
it only works in a relatively straightforward recursion environment,
    such as might be found in Pascal.

    Interleaved call chains, non-local returns, exception handling,
    continuations, co-routines, etc. - all can complicate the analysis to
    the point that it becomes [perhaps not impossible, but] quite
    impractical.


    Gambit Scheme tries, and in some cases it can succeed with non-tail
    self recursions and even simple mutual recursions. But the conditions
    under which the optimization is possible are very limited.


    (Or, well, below; if I were a CS academic, I'd make less.)

It doesn't seem obvious to me. Consider a graph traversal (like garbage
collection marking). When you visit N pointers of a graph node, only the
    last visit can be a tail call. The first N-1 have to return to that
    node.

You're absolutely right: in the general case one or more auxiliary
data structures are required to preserve state for later use.

[Aside: for marking GC, the best structure is a deque.]

    In general:

    The loop must manage its state explicitly, saving and restoring its
    variables as needed, whereas in recursion the function call/return
    mechanism handles most state implicitly.
    [This is, of course, as viewed from the perspective of source in a
    high(er) level language. In assembler, everything is explicit.]

    With a loop the state to be maintained no longer includes call chain
    or scope information, thus the amount of auxiliary state memory can be
    reduced.
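
    For illustration, a minimal Forth sketch of such an explicit-stack
    traversal (the three-cell node layout and the fixed stack size are
    assumptions made up for the example, not anything from this thread):

    \ Hypothetical node layout: mark-cell | left-ptr | right-ptr,
    \ with 0 for a null child. The recursion's call chain becomes
    \ an explicit array of pending nodes.
    CREATE node-stack 64 CELLS ALLOT   \ sized arbitrarily
    VARIABLE node-sp
    : npush ( node -- )  node-stack node-sp @ + !  1 CELLS node-sp +! ;
    : npop ( -- node )  -1 CELLS node-sp +!  node-stack node-sp @ + @ ;
    : unmarked? ( node -- flag )  DUP IF @ 0= ELSE DROP 0 THEN ;
    : mark-graph ( root -- )
      0 node-sp !  npush
      BEGIN node-sp @ WHILE            \ until no pending nodes remain
        npop DUP unmarked? IF
          1 OVER !                     \ set the mark cell
          DUP CELL+ @ npush            \ remember left child
          2 CELLS + @ npush            \ remember right child
        ELSE DROP THEN
      REPEAT ;

    The mark test on popping also copes with shared nodes and cycles;
    that test replaces exactly the state a recursive version carries
    implicitly in its call frames.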



    There is a mildly famous (in some circles) pointer-reversal trick that
    has been used to implement a stackless traversal for garbage collection.
    But it's not a simple code transformation; it mutates the nodes in order
    to, effectively, create a return stack within the graph itself.

    That could not possibly be a valid automatic compiler optimization.

    Since it can be done algorithmically, certainly it could be generated
    by a compiler. But optimizations have to be applicable to a variety
    of situations ... I have trouble seeing what more general uses pointer
    reversal could be put to.

    GC - depending on your POV - can be seen either as a runtime optimizer
    or as a runtime error handler. But the operative word in both cases
    is "runtime".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to albert@cherry. on Fri Dec 29 14:09:52 2023
    XPost: comp.lang.lisp

    albert@cherry.(none) (albert) writes:
    Rubin's program can also be run with an older version of Guile (2.0.13).
    My guess is that Rubin made a fresh start each time

    Yes, I re-ran the program for each value, to get the timing.

    with the mysterious ?singleton call.

    singleton? just checks whether a list has exactly one element. It is
    used to detect the case where only one coin is available to make change
    with. In that case, there are either 0 ways or 1 way to make change,
    depending on whether the coin denomination divides the amount to be
    changed.
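
    A one-line Forth rendering of that base case (a sketch of mine, not
    Rubin's Scheme):

    \ With a single denomination there is exactly one way iff the
    \ denomination divides the amount evenly; the all-bits true
    \ flag (-1) ANDed with 1 yields 1.
    : one-coin-ways ( amount denom -- 0|1 )  MOD 0= 1 AND ;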

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Anton Ertl on Sat Dec 30 08:57:14 2023
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    \ SORTI (slightly slower than SORTM, slightly faster than SORTI2)

    : sorti ( addr u -- )
      cells over + 0 -rot begin ( 0 al1 ah1 .. aln ahn )
        begin
          over cell+ over u>= while
            2drop dup 0= if
              drop exit then
        repeat
        partition
      again ;

    Let's see if we can improve on this with the extended CASE (untested):

    : sorti3 ( addr u -- )
      cells over + 0 -rot case ( 0 al1 ah1 .. aln ahn )
        over cell+ over u< ?of partition contof
        2drop dup ?of contof
      endcase
      drop ;

    It looks neater and gets rid of the EXIT in the middle.

    - anton
    --
    M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
    comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
    New standard: https://forth-standard.org/
    EuroForth 2023: https://euro.theforth.net/2023

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Paul Rubin on Sat Dec 30 07:41:42 2023
    XPost: comp.lang.lisp

    Paul Rubin <no.email@nospam.invalid> writes:
    Yes of course, every compiler has to turn recursion into a loop, since assembly language doesn't have recursion.

    Assembly languages typically have call and return instructions, and
    the call instructions don't disallow calling a target that has been
    called further up in the call chain (unlike early Fortran), so,
    contrary to your claim, assembly language "has" recursion.

    You push the recursive args
    onto a stack, and then the loop pops args from the stack and processes
    them. Example (quicksort):

    push initial arg onto stack
    while (stack not empty):
        pop arg from stack
        create two partitions p1 and p2
        push both onto stack

    That's not how it normally works when a recursive quicksort is
    compiled to machine language. There is no "while (stack not empty)"
    loop. It's more like:

    sort:
        save some callee-saved registers (possibly including the return address)
        if the array contains more than one element then
            create two partitions p1 and p2
            save live caller-saved registers
            set up the parameters for sorting p1
            call sort
            restore live caller-saved registers
            save live caller-saved registers
            set up the parameters for sorting p2
            call sort
            restore live caller-saved registers
        restore the saved callee-saved registers
        return

    Note that the return returns to the caller of sort (whether it is one
    of the recursive calls, or a client call); it does not do a "stack is
    empty" check.

    The second call is typically a tail-call and can be converted into a
    jump. In this case you get:

    sort:
        save some callee-saved registers (possibly including the return address)
        if the array contains more than one element then
            create two partitions p1 and p2
            save live caller-saved registers
            set up the parameters for sorting p1
            call sort
            restore live caller-saved registers
            set up the parameters for sorting p2
            restore the saved callee-saved registers
            jump sort
        restore the saved callee-saved registers
        return

    Here we have a loop, but again, there is no checking of the stack
    depth, the remaining call is a recursive call, and the return returns
    to the caller (which may be the recursive call). It is possible to
    convert this into two loops (now ignoring the register management that
    the compiler has to do for assembly language, and using higher-level control-flow):

    sort:
        push array
        counter := 1
        while (counter>0)
            array := pop
            counter := counter-1
            while (array contains more than one element)
                create two partitions p1 and p2
                push p2
                counter := counter+1
                array := p1

    Concerning your pseudocode, I implemented a number of quicksort
    variants in Forth several years ago. You can find all the variants in
    <http://www.complang.tuwien.ac.at/forth/programs/sort.fs>, and the
    results in <2013Aug15.191549@mips.complang.tuwien.ac.at>. SORTD, an
    iterative version that checked the stack depth, was the slowest.
    SORTM, which used recursion for the first partition and a loop for
    the second partition, was the fastest. I also examined an iterative
    version, SORTI, that used a sentinel on the stack instead of checking
    the stack depth; it was faster than SORTD, but slightly slower than
    SORTM. Using a counter instead of the sentinel would have resulted in
    more complicated stack handling.

    Here are the versions mentioned above:

    \ SORTM (best-performing)
    : sort2 ( al ah -- )
      begin
        over cell+ over u< while
          partition recurse
      repeat
      2drop ;

    : sortm ( addr u -- )
      cells over + sort2 ;

    \ SORTI (slightly slower than SORTM, slightly faster than SORTI2)

    : sorti ( addr u -- )
      cells over + 0 -rot begin ( 0 al1 ah1 .. aln ahn )
        begin
          over cell+ over u>= while
            2drop dup 0= if
              drop exit then
        repeat
        partition
      again ;

    \ SORTD (much slower than the others)
    : sortd ( addr u -- )
      depth 2 - >r
      cells over + begin ( al1 ah1 .. aln ahn )
        over cell+ over u< if
          partition
        else
          2drop
        then
      depth r@ <= until
      r> drop ;

    - anton
    --
    M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
    comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
    New standard: https://forth-standard.org/
    EuroForth 2023: https://euro.theforth.net/2023

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to no.email@nospam.invalid on Sat Dec 30 13:05:02 2023
    XPost: comp.lang.lisp

    In article <87le9e7wq8.fsf@nightsong.com>,
    Paul Rubin <no.email@nospam.invalid> wrote:
    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    However, it is true that any recursion can be turned into a loop, and

    Can it, by an algorithm? That's a theoretical question that's above my
    pay grade.

    Yes of course, every compiler has to turn recursion into a loop, since
    assembly language doesn't have recursion. You push the recursive args
    onto a stack, and then the loop pops args from the stack and processes
    them. Example (quicksort):

    push initial arg onto stack
    while (stack not empty):
        pop arg from stack
        create two partitions p1 and p2
        push both onto stack

    The question came up on reddit.
    How to make qsort iterative.
    Here is my answer:

    In Forth you can mostly trade the recursive return stack (remembering
    a complicated path through the software) for an explicit data stack.
    The data stack then expands and shrinks.
    The QSORT in (my) ciforth has a complicated PARTITION (like yours) and
    then the recursion goes:
      : (QSORT) ( lo hi -- )
        PARTITION ( lo_1 hi_1 lo_2 hi_2 )
        2DUP < IF RECURSE ELSE 2DROP THEN
        2DUP < IF RECURSE ELSE 2DROP THEN ;
    It is easy enough to start with a sentinel on the stack
      0 0 2SWAP
    and then proceed with
      BEGIN 2DUP OR WHILE
        PARTITION 2DUP >= IF 2DROP THEN
      REPEAT
    What you do is expand and shrink the partitions on the stack, until
    you arrive at the sentinel. Note that Knuth in The Art of Computer
    Programming mostly avoids recursion, at the expense of an explicit
    stack. The difference is not great actually.
    See Knuth TAOCP volume 3, page 117. This is algorithm Q: "An auxiliary
    stack with at most lg N entries is needed for temporary storage".
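
    As an aside, the lg N bound that Knuth cites comes from always
    handling the smaller partition first. A sketch in the same style
    (untested, assuming the PARTITION signature above; the names are
    made up for the example):

    \ Put the smaller partition in front, recurse on it, loop on the
    \ larger one; the recursion depth is then bounded by lg N.
    : SMALLER-FIRST ( lo1 hi1 lo2 hi2 -- loA hiA loB hiB )
      2DUP SWAP - >R 2OVER SWAP - R> > IF 2SWAP THEN ;
    : (QSORT2) ( lo hi -- )
      BEGIN 2DUP < WHILE
        PARTITION SMALLER-FIRST 2SWAP RECURSE
      REPEAT 2DROP ;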

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to Anton Ertl on Sun Dec 31 11:56:35 2023
    XPost: comp.lang.lisp

    In article <2023Dec30.084142@mips.complang.tuwien.ac.at>,
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:

    \ SORTM (best-performing)
    : sort2 ( al ah -- )
      begin
        over cell+ over u< while
          partition recurse
      repeat
      2drop ;

    I hate it if one uses U< for comparing numbers that happen
    to be positive.
    Are you really expecting the index to straddle the boundary between
    e.g. 9223372036854775807 and 9223372036854775809?

    <SNIP>
    - anton
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to albert@cherry. on Sun Dec 31 16:01:20 2023
    XPost: comp.lang.lisp

    albert@cherry.(none) (albert) writes:
    In article <2023Dec30.084142@mips.complang.tuwien.ac.at>,
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:

    \ SORTM (best-performing)
    : sort2 ( al ah -- )
      begin
        over cell+ over u< while
          partition recurse
      repeat
      2drop ;

    I hate it if one uses U< for comparing numbers that happen
    to be positive.

    What makes you think that this code compares numbers with U<? It
    actually compares two addresses, and I certainly don't want my sorting
    programs to fail when the sorted array straddles the addresses
    2147483647 and 2147483648 on a 32-bit system. Note that an "a" prefix typically does not indicate a signed number.

    Followups set to comp.lang.forth

    - anton
    --
    M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
    comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
    New standard: https://forth-standard.org/
    EuroForth 2023: https://euro.theforth.net/2023

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to 433-929-6894@kylheku.com on Mon Jan 1 14:01:10 2024
    XPost: comp.lang.lisp

    In article <20231228140434.720@kylheku.com>,
    Kaz Kylheku <433-929-6894@kylheku.com> wrote:
    On 2023-12-28, George Neuner <gneuner2@comcast.net> wrote:
    On Wed, 27 Dec 2023 23:12:07 -0000 (UTC), Kaz Kylheku <433-929-6894@kylheku.com> wrote:

    On 2023-12-27, George Neuner <gneuner2@comcast.net> wrote:
    On Wed, 27 Dec 2023 15:57:48 +0000, zbigniew2011@gmail.com (LIT)
    wrote:

    Recursion is silly (if you can avoid it).

    Not really.

    Recursion is elegant — but actually isn't any
    more efficient than iteration, and of course
    it requires stack space.

    Not necessarily. Scheme compilers actually are *required* to turn
    self recursion into a loop (except in the case of a supported "debug"
    mode).

    Careful there: not all self-recursion is tail recursion!

    Oops, you're right. I should have said Scheme is required to turn
    *tail-recursion* into a loop.

    However, it is true that any recursion can be turned into a loop, and

    Can it, by an algorithm? That's a theoretical question that's above my pay grade.

    (Or, well, below; if I were a CS academic, I'd make less.)

    It doesn't seem obvious to me. Consider a graph traversal (like garbage
    collection marking). When you visit N pointers of a graph node, only the
    last visit can be a tail call. The first N-1 have to return to that
    node.

    There is a mildly famous (in some circles) pointer-reversal trick that
    has been used to implement a stackless traversal for garbage collection.
    But it's not a simple code transformation; it mutates the nodes in order
    to, effectively, create a return stack within the graph itself.

    That could not possibly be a valid automatic compiler optimization.

    Note that in the example that started this thread (the count-change
    program) there was a double recursion, which cannot readily be
    handled by tail-recursion optimisation.
    On the other hand, the alternative algorithm was easier to understand,
    even for lispers who are thoroughly familiar with recursion.
    It runs fast without the need for an external optimisation like
    memoization.

    And may I stress it:
    this is my home-brew compiler. It is a couple of thousand lines of
    assembler. The compiler is built by a single call to the fasm
    assembler, which takes less than 10 mS. No external library.
    Two nested loops, that is all. As small as the original algorithm.


    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to Anton Ertl on Mon Jan 1 13:33:58 2024
    In article <2023Dec31.170120@mips.complang.tuwien.ac.at>,
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    albert@cherry.(none) (albert) writes:
    In article <2023Dec30.084142@mips.complang.tuwien.ac.at>,
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:

    \ SORTM (best-performing)
    : sort2 ( al ah -- )
      begin
        over cell+ over u< while
          partition recurse
      repeat
      2drop ;

    I hate it if one uses U< for comparing numbers that happen
    to be positive.

    What makes you think that this code compares numbers with U<? It
    actually compares two addresses, and I certainly don't want my sorting
    programs to fail when the sorted array straddles the addresses
    2147483647 and 2147483648 on a 32-bit system. Note that an "a" prefix
    typically does not indicate a signed number.

    Sorry, my bad. My qsorts all work with indices.
    That remark was out of place.


    Followups set to comp.lang.forth

    - anton

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to no.email@nospam.invalid on Mon Jan 1 15:45:59 2024
    XPost: comp.lang.lisp

    In article <87jzowirtb.fsf@nightsong.com>,
    Paul Rubin <no.email@nospam.invalid> wrote:
    albert@cherry.(none) (albert) writes:
    Rubin's program can also be run with an older version of Guile (2.0.13).
    My guess is that Rubin made a fresh start each time

    Yes, I re-ran the program for each value, to get the timing.

    with the mysterious ?singleton call.

    singleton? just checks whether a list has exactly one element. It is
    used to detect the case where only one coin is available to make change
    with. In that case, there are either 0 ways or 1 way to make change, >depending on whether the coin denomination divides the amount to be
    changed.

    Next guess.
    With
        (set! cc (memoize cc))
    you overwrite `cc so that it henceforth uses the memoizer.

    That means that the second calculation of the same result
    is immediate and that larger results benefit from the
    smaller results that are calculated before them.
    True?

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to albert@cherry. on Wed Jan 3 10:17:43 2024
    XPost: comp.lang.lisp

    albert@cherry.(none) (albert) writes:
    you overwrite `cc so that it henceforth uses the memoizer.

    That means that the second calculation of the same result
    is immediate and that larger results benefit from the
    smaller results that are calculated before them.
    True?

    Yes, that is exactly right, sorry about the slow response. The Python equivalent (untested) is:

    def memoize(f):
        memo = {}                     # cache: argument tuple -> result
        def m(*args):
            if args in memo: return memo[args]
            memo[args] = f(*args)
            return memo[args]
        return m

    @memoize
    def cc(n, coins): ...

    There is an aspect of the Scheme version that confuses me slightly but I
    will ask some Scheme experts (I'm not one myself).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to albert@cherry. on Wed Jan 3 16:20:29 2024
    XPost: comp.lang.lisp

    albert@cherry.(none) (albert) writes:
    On the other hand, the alternative algorithm was easier to understand,
    even for lispers who are thoroughly familiar with recursion.
    It runs fast without the need for an external optimisation like
    memoization.

    I don't think I examined it, but is that the one that iteratively fills
    the lookup table before computing the requested value?

    I found in the memoizing Scheme version, after computing (cc 1000000),
    the memo table held 670004 entries, which is much less than the 5
    million that would be used by precomputing all the earlier entries.

    When I removed the optimization (I originally hadn't thought of it as
    one) of quickly checking divisibility when only one coin denomination
    was available, the table size went up to 2470005 entries, still half
    of what it would have been from precomputing everything.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to no.email@nospam.invalid on Thu Jan 4 10:52:28 2024
    XPost: comp.lang.lisp

    In article <87ttnu54qa.fsf@nightsong.com>,
    Paul Rubin <no.email@nospam.invalid> wrote:
    albert@cherry.(none) (albert) writes:
    On the other hand, the alternative algorithm was easier to understand,
    even for lispers who are thoroughly familiar with recursion.
    It runs fast without the need for an external optimisation like
    memoization.

    I don't think I examined it, but is that the one that iteratively fills
    the lookup table before computing the requested value?

    I found in the memoizing Scheme version, after computing (cc 1000000),
    the memo table held 670004 entries, which is much less than the 5
    million that would be used by precomputing all the earlier entries.

    When I removed the optimization (I originally hadn't thought of it as
    one) of quickly checking divisibility when only one coin denomination
    was available, the table size went up to 2470005 entries, still half
    of what it would have been from precomputing everything.

    Once you do memoization in Scheme, it is important that you mention
    the order of denominations (increasing or decreasing).
    The minimum should be N/50 + N/25 + N/10 + N/5 + N as far as I can
    see (in the order of increasing denominations); for N = 1,000,000
    that is 20,000 + 40,000 + 100,000 + 200,000 + 1,000,000 = 1,360,000
    entries.
    I can't understand how there can be fewer than a million entries,
    unless ..
    Can the memoization "see" that once cc(N-5,5) is known,
    cc(N-4,5) = cc(N-5,5), up till cc(N-1,5) (which are zero till now
    in the iterative version),
    and that cc(N,5) becomes cc(N,4)+cc(N-5,5)? That is chill.
    (The iterative version also saves some 50% of the time going 50 -> 1.)

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mhx@21:1/5 to All on Thu Jan 4 11:58:58 2024
    XPost: comp.lang.lisp

    Can the memoization "see" that once cc(N-5,5) is known,
    cc(N-4,5) = cc(N-5,5), up till cc(N-1,5) (which are zero till now
    [..]

    I never used naive memoization much because the table size becomes
    impractical for real-world problems. If it is known how to compress
    N-dim tables (in hopefully a simple way), I'd be very interested
    to know how it is done (even when N=1).

    -marcel

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to mhx on Thu Jan 4 18:45:30 2024
    XPost: comp.lang.lisp

    On 2024-01-04, mhx <mhx@iae.nl> wrote:
    Can the memoization "see" that once cc(N-5,5) is known,
    cc(N-4,5) = cc(N-5,5), up till cc(N-1,5) (which are zero till now
    [..]

    I never used naive memoization much because the table size becomes impractical for real-world problems. If it is known how to compress
    N-dim tables (in hopefully a simple way), I'd be very interested
    to know how it is done (even when N=1).

    In some cases, eviction may be possible (i.e. caching).

    If we consider memoized Fibonacci, once it has calculated fib(7),
    it's not going to have to calculate fib(5) again. That value could
    be evacuated from the table. In fact, I suspect that a LRU table
    with only several entries would serve.

    This is easily confirmed code-wise, using TXR Lisp:

    $ txr -i fib.tl
    Read-eval-disagree-downvote-insult treadmill initialized! Enter an expression:
    (compile 'fib-rec)
    #<vm fun: 2 param>
    (pprof (fib 200))
    malloc bytes: 74356
    gc heap bytes: 66272
    total: 140628
    milliseconds: 2
    453973694165307953197296969697410619233826
    (trace fib-rec)
    nil
    (fib 8)
    (fib-rec (8 #(nil nil nil))
      (fib-rec (6 #(nil nil nil))
        (fib-rec (4 #(nil nil nil))
          (fib-rec (2 #(nil nil nil))
            (fib-rec (0 #(nil nil nil))
              1)
            (fib-rec (1 #(nil nil nil))
              1)
            2)
          (fib-rec (3 #((2 . 2) nil nil))
            (fib-rec (1 #((2 . 2) nil nil))
              1)
            (fib-rec (2 #((2 . 2) nil nil))
              2)
            3)
          5)
        (fib-rec (5 #((4 . 5) (3 . 3) (2 . 2)))
          (fib-rec (3 #((4 . 5) (3 . 3) (2 . 2)))
            3)
          (fib-rec (4 #((4 . 5) (3 . 3) (2 . 2)))
            5)
          8)
        13)
      (fib-rec (7 #((6 . 13) (5 . 8) (4 . 5)))
        (fib-rec (5 #((6 . 13) (5 . 8) (4 . 5)))
          8)
        (fib-rec (6 #((6 . 13) (5 . 8) (4 . 5)))
          13)
        21)
      34)
    34

    Calculating (fib 200) is instantaneous. The trace shows
    that caching with a three-element table is sufficient;
    e.g. once (fib-rec 5) is calculated, the second time
    it is encountered, it no longer recurses to (fib-rec 3)
    and (fib-rec 4); it just returns 8.

    This is the code in fib.tl:

    (defun fib-rec (n table)
      (or (if (meq n 0 1) 1)
          (cdr [find n table : car])
          (flow (+ (fib-rec (ppred n) table)
                   (fib-rec (pred n) table))
            (let f)
            (cons n)
            (set [(nrot table -1) 0])
            (ret f))))

    (defun fib (n)
      (fib-rec n (vector 3)))

    The auxiliary table parameter always holds the same 3 element
    vector object throughout the recursion. The vector holds
    cons pairs of the form (cons n (fib n)), thus mapping Fibonacci
    indices to values.

    Whenever there is a table miss, the nrot function is used to
    rotate the table entries down, and the new value is placed
    in the first element, [table 0].

    In fact, a two-element table is sufficient; if we change
    the (vector 3) to (vector 2), the performance is not affected.

    Moreover, it doesn't matter whether the recursive step is
    (+ (fib-rec (ppred n) table) (fib-rec (pred n) table)) or
    (+ (fib-rec (pred n) table) (fib-rec (ppred n) table));
    a two-element cache is sufficient. (TXR Lisp is strictly
    evaluated, left-to-right, like Common Lisp, so if we
    vary the order of the arguments to +, we can control the
    order in which the recursive calls take place.)

    In conclusion, while memoized Fibonacci is a great textbook
    example for illustrating the power of memoization,
    it would behoove the textbooks to also add the observation
    that it still works with a two-slot LRU cache.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to mhx on Thu Jan 4 10:56:40 2024
    XPost: comp.lang.lisp

    mhx@iae.nl (mhx) writes:
    If it is known how to compress N-dim tables (in hopefully a simple
    way), I'd be very interested to know how it is done (even when N=1).

    In the Scheme code I posted, the "coordinate list" is used as a hash key.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Kaz Kylheku on Thu Jan 4 19:49:10 2024
    XPost: comp.lang.lisp

    On 2024-01-04, Kaz Kylheku <433-929-6894@kylheku.com> wrote:
    In conclusion, while memoized Fibonacci is a great textbook
    example for illustrating the power of memoization,
    it would behoove the textbooks to also add the observation
    that it still works with a two-slot LRU cache.

    However, due to the function activation frames, memory
    use is still O(n).

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to albert@cherry. on Thu Jan 4 13:22:13 2024
    XPost: comp.lang.lisp

    albert@cherry.(none) (albert) writes:
    Can the memoization "see" that once cc(N-5,5) is known,
    cc(N-4,5) = cc(N-5,5), up till cc(N-1,5) (which are zero till now
    in the iterative version),

    I'm not sure what you mean by that. The memoization scheme proceeds recursively until it reaches a place where it knows how to get a
    numerical answer. Then it computes and remembers that answer, so it can
    use it the next time the calculation gets there.

    and that cc(N,5) becomes cc(N,4)+cc(N-5,5) . That is chill.

    Yes that is right.

    (The iterative version also saves some 50% of the time going 50 -> 1)

    I don't think I understand your iterative algorithm, but I think there
    is one that actually doesn't have to remember the calculation results
    all the way back to the beginning. It only has to remember the last 100
    (or whatever the size of the largest coin is), and similarly for the
    other coin sizes. I'll see if I can code that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to James Brakefield on Thu Jan 4 18:04:06 2024
    James Brakefield <jim.brakefield@ieee.org> writes:
    To convert Fibonacci to a linear recursion one needs to keep two
    (usually the two previous) values

    This is about the coins problem, not Fibonacci.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to no.email@nospam.invalid on Fri Jan 5 13:27:43 2024
    XPost: comp.lang.lisp

    In article <87le946bga.fsf@nightsong.com>,
    Paul Rubin <no.email@nospam.invalid> wrote:
    albert@cherry.(none) (albert) writes:
    Can the memoization "see" that once cc(N-5,5) is known,
    cc(N-4,5) = cc(N-5,5), up till cc(N-1,5) (which are zero till now
    in the iterative version),

    I'm not sure what you mean by that. The memoization scheme proceeds
    recursively until it reaches a place where it knows how to get a
    numerical answer. Then it computes and remembers that answer, so it can
    use it the next time the calculation gets there.

    and that cc(N,5) becomes cc(N,4)+cc(N-5,5) . That is chill.

    Yes that is right.

    (The iterative version also saves some 50% of the time going 50 -> 1)

    I don't think I understand your iterative algorithm, but I think there
    is one that actually doesn't have to remember the calculation results
    all the way back to the beginning. It only has to remember the last 100
    (or whatever the size of the largest coin is), and similarly for the
    other coin sizes. I'll see if I can code that.

    I thought of that too, and it doesn't work. It is something inherent
    to two-way recursion.

    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to 433-929-6894@kylheku.com on Fri Jan 5 13:42:58 2024
    XPost: comp.lang.lisp

    In article <20240104114837.742@kylheku.com>,
    Kaz Kylheku <433-929-6894@kylheku.com> wrote:
    On 2024-01-04, Kaz Kylheku <433-929-6894@kylheku.com> wrote:
    In conclusion, while memoized Fibonacci is great textbook
    example for illustrating the power of memoization,
    it woudl behoove the textbooks to also add the observation
    that it still works with a two-slot LRU cache.

    However, due to the function activation frames, memory
    use is still O(n).

    In the coins example I did not use memoization per se, but
    rather a reverse memoization. Please check that the memory
    usage of the Forth example for FIB is O(1), also with
    reverse memoization.
    [I had success with this technique in several of the 400 problems
    of projecteuler.net that I solved.]


    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From none) (albert@21:1/5 to 433-929-6894@kylheku.com on Fri Jan 5 13:37:25 2024
    XPost: comp.lang.lisp

    In article <20240104101004.858@kylheku.com>,
    Kaz Kylheku <433-929-6894@kylheku.com> wrote:
    On 2024-01-04, mhx <mhx@iae.nl> wrote:
    Can the memoization "see" that once cc(N-5,5) is known,
    cc(N-4,5) = cc(N-5,5), up till cc(N-1,5) (which are zero till now
    [..]

    I never used naive memoization much because the table size becomes
    impractical for real-world problems. If it is known how to compress
    N-dim tables (in hopefully a simple way), I'd be very interested
    to know how it is done (even when N=1).

    In some cases, eviction may be possible (i.e. caching).

    If we consider memoized Fibonacci, once it has calculated fib(7),
    it's not going to have to calculate fib(5) again. That value could
    be evacuated from the table. In fact, I suspect that a LRU table
    with only several entries would serve.

    This is easily confirmed code-wise, using TXR Lisp:

    $ txr -i fib.tl
    Read-eval-disagree-downvote-insult treadmill initialized! Enter an expression:
    (compile 'fib-rec)
    #<vm fun: 2 param>
    (pprof (fib 200))
    malloc bytes: 74356
    gc heap bytes: 66272
    total: 140628
    milliseconds: 2
    453973694165307953197296969697410619233826
    (trace fib-rec)
    nil
    (fib 8)
    (fib-rec (8 #(nil nil nil))
      (fib-rec (6 #(nil nil nil))
        (fib-rec (4 #(nil nil nil))
          (fib-rec (2 #(nil nil nil))
            (fib-rec (0 #(nil nil nil))
              1)
            (fib-rec (1 #(nil nil nil))
              1)
            2)
          (fib-rec (3 #((2 . 2) nil nil))
            (fib-rec (1 #((2 . 2) nil nil))
              1)
            (fib-rec (2 #((2 . 2) nil nil))
              2)
            3)
          5)
        (fib-rec (5 #((4 . 5) (3 . 3) (2 . 2)))
          (fib-rec (3 #((4 . 5) (3 . 3) (2 . 2)))
            3)
          (fib-rec (4 #((4 . 5) (3 . 3) (2 . 2)))
            5)
          8)
        13)
      (fib-rec (7 #((6 . 13) (5 . 8) (4 . 5)))
        (fib-rec (5 #((6 . 13) (5 . 8) (4 . 5)))
          8)
        (fib-rec (6 #((6 . 13) (5 . 8) (4 . 5)))
          13)
        21)
      34)
    34

    Calculating (fib 200) is instantaneous. The trace shows
    that caching with a three-element table is sufficient;
    e.g. once (fib-rec 5) is calculated, the second time
    it is encountered, it no longer recurses to (fib-rec 3)
    and (fib-rec 4); it just returns 8.

    This is the code in fib.tl:

    (defun fib-rec (n table)
      (or (if (meq n 0 1) 1)
          (cdr [find n table : car])
          (flow (+ (fib-rec (ppred n) table)
                   (fib-rec (pred n) table))
            (let f)
            (cons n)
            (set [(nrot table -1) 0])
            (ret f))))

    (defun fib (n)
      (fib-rec n (vector 3)))

    The auxiliary table parameter always holds the same 3 element
    vector object throughout the recursion. The vector holds
    cons pairs of the form (cons n (fib n)), thus mapping Fibonacci
    indices to values.

    Whenever there is a table miss, the nrot function is used to
    rotate the table entries down, and the new value is placed
    in the first element, [table 0].

    In fact, a two-element table is sufficient; if we change
    the (vector 3) to (vector 2), the performance is not affected.

    Moreover, it doesn't matter whether the recursive step is
    (+ (fib-rec (ppred n) table) (fib-rec (pred n) table)) or
    (+ (fib-rec (pred n) table) (fib-rec (ppred n) table));
    a two-element cache is sufficient. (TXR Lisp is strictly
    evaluated, left-to-right, like Common Lisp, so if we
    vary the order of the arguments to +, we can control the
    order in which the recursive calls take place.)

    In conclusion, while memoized Fibonacci is a great textbook
    example for illustrating the power of memoization,
    it would behoove the textbooks to also add the observation
    that it still works with a two-slot LRU cache.
    +1

    You have managed to arrive at this iterative version, which keeps
    track of only two values:
      : FIB
        >R 0 1
        R> 0 ?DO SWAP OVER + 1 +LOOP
        DROP ;

    (Interested to see this coded in Lisp.)



    Groetjes Albert
    --
    Don't praise the day before the evening. One swallow doesn't make spring.
    You must not say "hey" before you have crossed the bridge. Don't sell the
    hide of the bear until you shot it. Better one bird in the hand than ten in
    the air. First gain is a cat spinning. - the Wise from Antrim -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to albert@cherry on Fri Jan 5 18:53:02 2024
    XPost: comp.lang.lisp

    On 2024-01-05, albert@cherry.(none) (albert) <albert@cherry> wrote:
    You have managed to arrive at this iterative version, which keeps
    track of only two values.

    Well, yes; that is understood, and it is where the intuition comes
    from for why memoization implicitly reduces the exponential explosion
    in Fibonacci. It must be effectively iterating on the recurrence in
    the straightforward way, since it takes linear time.

    The recurrence relation implies a two-element state; I guessed at
    a cache size of three items as a fudge factor; when that
    was debugged, I reduced to two.

    Sometimes, the Fibonacci recurrence is expressed as a matrix
    multiplication on a state vector [a b]

    [1 1][a]    [a + b]
    [1 0][b] => [  a  ]

    a receives the sum of a + b, while b receives the prior value of a.

    Raising this matrix to an arbitrary integer power is how we arrive
    at the closed form solution.
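
    Squaring that matrix gives the fast-doubling identities
    fib(2k) = fib(k)*(2*fib(k+1) - fib(k)) and
    fib(2k+1) = fib(k)^2 + fib(k+1)^2, which compute fib(n) in
    O(log n) steps. A sketch in standard Forth (untested, using
    the fib(0)=0, fib(1)=1 convention rather than the one above):

    : FIB2 ( n -- fib[n] fib[n+1] )   \ fast doubling
      DUP 0= IF DROP 0 1 EXIT THEN
      DUP 2/ RECURSE            ( n a b )   \ a=fib[k] b=fib[k+1] k=n/2
      2DUP 2* SWAP - 2 PICK *   ( n a b c ) \ c = a*(2b-a) = fib[2k]
      ROT DUP * ROT DUP * +     ( n c d )   \ d = a*a+b*b  = fib[2k+1]
      ROT 1 AND IF TUCK + THEN ;            \ odd n: shift the pair up
    : FIB ( n -- fib[n] )  FIB2 DROP ;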

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mhx@21:1/5 to Kaz Kylheku on Fri Jan 5 19:44:27 2024
    XPost: comp.lang.lisp

    Kaz Kylheku wrote:

    Sometimes, the Fibonacci recurrence is expressed as a matrix
    multiplication on a state vector [a b]

    [1 1][a]    [a + b]
    [1 0][b] => [  a  ]

    a receives the sum of a + b, while b receives the prior value of a.

    Raising this matrix to an arbitrary integer power is how we arrive
    at the closed form solution.

    Really nice! I'll keep this general idea in a safe place.

    -marcel

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to mhx on Fri Jan 5 21:24:00 2024
    XPost: comp.lang.lisp

    On 2024-01-05, mhx <mhx@iae.nl> wrote:
    Kaz Kylheku wrote:

    Sometimes, the Fibonacci recurrence is expressed as a matrix
    multiplication on a state vector [a b]

    [1 1][a]    [a + b]
    [1 0][b] => [  a  ]

    a receives the sum of a + b, while b receives the prior value of a.

    Raising this matrix to an arbitrary integer power is how we arrive
    at the closed form solution.

    Really nice! I'll keep this general idea in a safe place.

    You don't have to; it's found in textbooks.

    Using linear algebra techniques found in introductory texts, you can do
    this:

         n
    [1 1]
    [1 0]

    The details escape me, much like a departing youthful appearance, but
    basically we can factor it into a form

    U X^n V

    where U, X, V are all 2x2 matrices. The middle one, X, is special
    in that it is diagonal (with the eigenvalues on its diagonal).
    A diagonal matrix is easy to exponentiate.

    Something like that.

    That gives us that closed form formula for Fibonacci with those
    square roots of five in it.

    Closely related to the Golden Ratio! Which is (/ (+ 1 (sqrt 5)) 2).
    The ratio of successive values of the Fibonacci numbers approaches
    the golden ratio.

    (/ (fib 50) (fib 49))
    1.61803398874989

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mhx@21:1/5 to Kaz Kylheku on Sat Jan 6 09:15:12 2024
    XPost: comp.lang.lisp

    Kaz Kylheku wrote:

    On 2024-01-05, mhx <mhx@iae.nl> wrote:
    Kaz Kylheku wrote:

    Sometimes, the Fibonacci recurrence is expressed as a matrix
    multiplication on a state vector [a b]

    [1 1][a]    [a + b]
    [1 0][b] => [  a  ]

    a receives the sum of a + b, while b receives the prior value of a.

    Raising this matrix to an arbitrary integer power is how we arrive
    at the closed form solution.

    Really nice! I'll keep this general idea in a safe place.

    You don't have to; it's found in textbooks.

    It strongly connects to the control engineering approaches I use for
    analyzing recurrences in switched electronic systems.

    Using linear algebra techniques found in introductory texts, you can do
    this:

         n
    [1 1]
    [1 0]

    The details escape me, much like a departing youthful appearance, but basically we can factor it into a form

    U X^n V

    where U, X, V are all 2x2 matrices. The middle one, X, is special
    in that it is diagonal (with the eigenvalues on its diagonal).
    A diagonal matrix is easy to exponentiate.

    Tricky one to write from scratch :--) I implemented the Pade approximation
    (the stablest one) for it when my appearance was still youthful.

    Something like that.

    That gives us that closed form formula for Fibonacci with those
    square roots of five in it.

    Closely related to the Golden Ratio! Which is (/ (+ 1 (sqrt 5)) 2).
    The ratio of successive values of the Fibonacci numbers approaches
    the golden ratio.

    Almost like a party trick. I love to find elegant techniques in
    unrelated fields of research that solve (my) practical problems in
    unexpected ways.

    (/ (fib 50) (fib 49))
    1.61803398874989

    -marcel

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to mhx on Sat Jan 6 16:51:16 2024
    XPost: comp.lang.lisp

    mhx@iae.nl (mhx) writes:
    Kaz Kylheku wrote:
    Closely related to the Golden Ratio! Which is (/ (+ 1 (sqrt 5)) 2).
    The ratio of successive values of the Fibonacci numbers approaches
    the golden ratio.

    Almost like a party trick. I love to find elegant techniques in
    unrelated fields of research that solve (my) practical problems in
    unexpected ways.

    Not sure if you wrote this about the golden ratio, but given the way
    the golden ratio phi is defined, I find it unsurprising that

    phi = lim(n->inf) fib(n+1)/fib(n)

    The definition of the golden ratio is

    (a+b)/a = a/b = phi (with a>b>0)

    or, as described in <https://en.wikipedia.org/wiki/Golden_ratio>:

    | a+b is to a as a is to b

    Given that's exactly how Fibonacci numbers are formed, if the sequence
    of ratios fib(n+1)/fib(n) converges, the result must be the golden
    ratio.

    It's equally unsurprising that for the Lucas numbers, which are formed
    in the same way, but with different base values (2 and 1 instead of 0
    and 1), the limit of L(n+1)/L(n) is also the golden ratio.

    Conversely, you can compute the nth Fibonacci number using a closed
    formula that involves phi.
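
    For reference, that closed formula is Binet's:

    fib(n) = (phi^n - psi^n) / sqrt(5),  with psi = 1 - phi = -1/phi

    and since |psi| < 1, fib(n) is simply phi^n/sqrt(5) rounded to the
    nearest integer (using the convention fib(0)=0, fib(1)=1).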

    - anton
    --
    M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
    comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
    New standard: https://forth-standard.org/
    EuroForth 2023: https://euro.theforth.net/2023

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)