• Post Corona 2023 Fibertalk Challenge

    From Mostowski Collapse@21:1/5 to All on Thu Mar 2 11:39:14 2023
    No, we don't want you to discuss fever from long COVID. Your dietary
    customs, i.e. whether you eat certain carbohydrates, aren't at stake
    either. It's about these fibers:

    However, fibers use cooperative multitasking while
    threads use preemptive multitasking. https://en.wikipedia.org/wiki/Fiber_(computer_science)
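
    To make the cooperative part concrete, here is a minimal sketch using
    SWI-Prolog engines (engine_create/3, engine_next/2): each engine only
    runs when the scheduler resumes it, nothing gets preempted. The
    predicates ticker/1 and round_robin/1 are just illustrative names, not
    part of any library:

    ticker(Name) :-
        between(1, 3, I),
        format("~w ~w ", [Name, I]).

    round_robin([]).
    round_robin([E|Es]) :-
        (   engine_next(E, _)          % resume E until its next answer
        ->  append(Es, [E], Es1),      % re-queue it behind the others
            round_robin(Es1)
        ;   engine_destroy(E),         % no more answers, retire it
            round_robin(Es)
        ).

    ?- engine_create(done, ticker(a), A),
       engine_create(done, ticker(b), B),
       round_robin([A, B]).

    This interleaves the two engines and prints: a 1 b 1 a 2 b 2 a 3 b 3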

    Try this little exercise:

    Engine 1 (Every Second):
    animates a clock in the GUI

    Engine 2 (Every Minute): calls async_process/4,
    fetches the current ₿ Bitcoin rate,
    and displays it

    Who is up to the challenge? Engine 2 is not allowed to block
    Engine 1, so some event-based network I/O will be needed, which is
    usually not found in Prolog systems.

    For example SWI-Prolog has only:

    ws_receive(+WebSocket, -Message:dict)
    Receive the next message from WebSocket. https://www.swi-prolog.org/pldoc/man?section=websocket

    The above doesn't look event-based; there is no callback option, etc.

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Fri Mar 3 10:12:56 2023
    For fibers, a higher context-switching effort might be desirable,
    because a low context-switching effort could indicate that your code
    uses indirection: for example, to access field1 or field2 you have to
    go through ctx.data.field1 or ctx.data.field2:

    class Context {
        Data data;
    }

    class Data {
        int field1;
        int field2;
    }

    No indirection is needed here; you can access ctx.field1 and
    ctx.field2 directly:

    class Context {
        int field1;
        int field2;
    }

    Now you can make a very simple calculation. Assume auto-yield, which
    needs a context switch 60 times per second, and that the switch cost
    is 1 for the indirect Context and 2 for the direct Context, since two
    fields have to be copied. Further assume that between the 60
    auto-yields the fields are accessed 1000 times per second, with an
    indirect access costing 2 and a direct access costing 1. The shorter
    context is then more costly:

    Indirect Shorter Context:
    Cost = 60*1 + 1000*2 = 2060

    Direct Larger Context:
    Cost = 60*2 + 1000*1 = 1120

    So larger contexts are better for fibers.
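
    The same back-of-the-envelope calculation as a tiny Prolog sketch
    (cost/5 is just an illustrative helper):

    % Cost = Switches*SwitchCost + Accesses*AccessCost
    cost(Switches, SwitchCost, Accesses, AccessCost, Cost) :-
        Cost is Switches*SwitchCost + Accesses*AccessCost.

    ?- cost(60, 1, 1000, 2, C).    % indirect, shorter context
    C = 2060.

    ?- cost(60, 2, 1000, 1, C).    % direct, larger context
    C = 1120.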

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Fri Mar 3 10:34:54 2023
    There is possibly a tipping point when you have more context switches
    than just the auto-yield ones, for example in a web server, whenever
    requests arrive. But the 1000 context accesses could be much larger as
    well; it could be that you access the context much more often. I am
    thinking of the Prolog stacks, the trail and the choice points, which
    are involved in everything a Prolog system does. Having more context
    accesses moves the tipping point further away.
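
    With the unit costs from the previous post (indirect: switch 1,
    access 2; direct: switch 2, access 1), the tipping point is easy to
    locate: the indirect, shorter context only wins once there are more
    context switches than context accesses. A small self-contained sketch,
    with cheaper_context/3 just an illustrative name:

    cheaper_context(Switches, Accesses, Context) :-
        Indirect is Switches*1 + Accesses*2,
        Direct is Switches*2 + Accesses*1,
        (   Indirect < Direct
        ->  Context = indirect
        ;   Context = direct
        ).

    ?- cheaper_context(60, 1000, C).      % auto-yield only
    C = direct.

    ?- cheaper_context(2000, 1000, C).    % e.g. switching on every request
    C = indirect.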

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Fri Mar 3 10:17:08 2023
    Here is a paper with some fiber costs:

                         Instructions   Data to move (bytes)
    System_V_x86_x64               23                     64
    MachO_arm64                    28                    176
    Win_x86_x64                    69                    352

    Fibers under the magnifying glass
    WG21 - Gor Nishanov - 2018 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1364r0.pdf

    BTW: I am currently designing a rather large Prolog context switch.

  • From Mostowski Collapse@21:1/5 to All on Sat Mar 4 05:09:54 2023
    Well, I must apologize. I was using the word fiber especially for a
    worker with an event loop. But under closer inspection of the notion,
    a worker with an event loop would support both fibers and non-fibers,
    if we take this classification:

    +--- subroutine
    +--- coroutine
         +--- stackless
         +--- stackful (fibers)

    In JavaScript the notion is a little bit blurred, since one can use
    setTimeout() with both stackful and stackless coroutines: you can call
    setTimeout() with an async function, which gives you a stackful
    coroutine and needs only a single setTimeout() call, or you can call
    setTimeout() with an ordinary function, which gives you a stackless
    coroutine if you re-schedule it over and over again.

    In Python the distinction is clearer. For example, call_later() from
    the event loop gives you a stackless coroutine, by the same method of
    calling it over and over again when the callback terminates, i.e.
    continuation-style re-scheduling. Or there is create_task() from the
    event loop, which gives you a stackful coroutine.

    So I doubt that anything that was said so far about fibers makes much
    sense, especially since the above model deals with 1:N coroutines
    (although Python has functions to inject a coroutine into an alien
    thread). In the 1:N coroutine model of a worker with an event loop one
    has to look at different costs: costs for stackless and costs for
    stackful coroutines.

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Sat Mar 4 05:12:21 2023
    I don't know how this plays out for SWI-Prolog. The testing reported
    here by Jan W doesn't test coroutines. It tests only threads,
    increasing the number of threads more and more. Threads are not
    coroutines. You would need to provide some special coroutine testing,
    and all you would possibly see is that there is no speed-up, only that
    the cost goes up.

    If the coroutine runs a Prolog engine, the costs for stackless and the
    costs for stackful can also be different. In the stackless case you
    don't need to create a new Prolog engine, you can reuse the existing
    main Prolog engine. In the stackful case you have some initial cost of
    creating the Prolog engine and then the additional cost of
    appropriately resuming the Prolog engine, besides the cost of the
    event loop itself, i.e. the context switching there.

    Unless you have a test case that also involves some network I/O or
    similar, i.e. processing done outside of the single thread where the
    coroutine runs, you won't see much benefit from coroutines. Not a
    benefit compared to threads, but a benefit of the coroutines
    themselves. Depending on how the network I/O scales, the coroutines
    will scale.

    Another case of additional CPU resources would be, for example,
    handing a bitmap to a GPU, or even handing some machine learning to a
    more versatile GPU, or, as Microsoft recently does, off-loading some
    anti-virus checking code to a GPU. Coroutines can wait for results
    just as well as threads can, and coroutines do not need to run in
    separate threads.

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Sat Mar 4 13:27:16 2023
    Thread switching, coroutine switching, engine switching: what is the
    cost? Here is an attempt at some measurement.

    System       fib/2   fib_hell/2    Ratio
    Jekejeke       365          541     148%
    Scryer         383          674     175%
    Dogelog       1680         4717     281%
    SWI            193         2823    1463%
    Trealla        716        63235    8832%

    Test case:

    fib(0, R) :- !, R=1.
    fib(1, R) :- !, R=1.
    fib(N, R) :- M is N-1, fib(M, A), L is M-1, fib(L, B), R is A+B.

    fib_hell(0, R) :- !, R=1, sleep(0).
    fib_hell(1, R) :- !, R=1, sleep(0).
    fib_hell(N, R) :- M is N-1, fib_hell(M, A), L is M-1, fib_hell(L, B), R is A+B.
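
    The post does not say for which N the numbers were taken, so treat the
    argument 26 below as a placeholder. A sketch for reproducing the ratio
    on a given system, using statistics/2 with the runtime key (bench/2 is
    just an illustrative helper):

    bench(Goal, Millis) :-
        statistics(runtime, [T0|_]),
        call(Goal),
        statistics(runtime, [T1|_]),
        Millis is T1 - T0.

    ?- bench(fib(26, _), T1), bench(fib_hell(26, _), T2).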

  • From Mostowski Collapse@21:1/5 to All on Sun Mar 5 04:48:32 2023
    I now came up with the following API for an event loop running in one
    thread, providing both 1:N fibers and non-fibers:

    - Part 1: Callbacks (non-fibers)
    They are stackless and run in the main engine of the current thread.
    In my current take they run without auto-yield (it is switched off),
    so a Prolog flag heartbeat with engine scope is needed.

    os_call_later(G, D, T):
    The predicate succeeds in T with a new timer. As a side effect
    it schedules the goal G to be executed after D milliseconds.

    os_call_cancel(T):
    The predicate succeeds. As a side effect it cancels the timer T.

    - Part 2: Tasks (1:N fibers)
    They are stackful and create their own engine in the current thread.
    In my current take they run with auto-yield (it is switched on), so
    a Prolog flag heartbeat with engine scope is needed.

    os_engine_current(E):
    The predicate succeeds in E with the current engine.

    os_engine_abort(E, M):
    The predicate succeeds. As a side effect the engine E gets
    the message M signalled.

    os_engine_new(G, E):
    The predicate succeeds in E with a new engine for the goal G.

    os_engine_start(E):
    The predicate succeeds. As a side effect the engine E gets
    scheduled to be executed.
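
    A minimal usage sketch of this proposed API, assuming the predicates
    behave as specified above; clock/0 and poll/0 are hypothetical goals
    standing in for the two parts:

    % Part 1: a stackless callback that re-schedules itself every second
    clock :-
        write('tick '), flush_output,
        os_call_later(clock, 1000, _).

    % Part 2: a stackful task in its own engine; the loop never yields
    % explicitly, it relies on the auto-yield heartbeat
    poll :-
        repeat,
        write('poll '), flush_output,
        fail.

    ?- os_call_later(clock, 1000, _),
       os_engine_new(poll, E),
       os_engine_start(E),
       os_call_later(os_engine_abort(E, stop), 5000, _).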

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Sun Mar 5 04:50:23 2023
    I am currently working on a proof of concept that uses the above
    elements to realize this high-level predicate, not with threads but
    with auto-yielding coroutines:

    first_solution(-X, :Goals, +Options) https://www.swi-prolog.org/pldoc/man?predicate=first_solution/3

    What are the chances that this is possible? It would be very useful,
    since one could run multiple theorem provers in the browser, just like
    the thread version does, but now doing it with coroutines. This could
    work for Prolog WASM in the browser and in Node.js as well. The
    SWI-Prolog implementation uses queues; I have no clue yet what the
    replacement with coroutines would be.

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Sun Mar 5 04:52:31 2023
    As a pretty serious use case, one could for example very easily code a
    tool such as Wolfgang Schwarz's tree tool, which runs a model finder
    and a theorem prover in parallel:

    Tree Proof Generator https://www.umsu.de/trees/

    The allotment of time to the two finders would follow the auto-yielding
    of both coroutines. Alternatively, if auto-yielding and the requirement
    of an engine-local heartbeat are too complicated, the two finders could
    also call sleep(0) at dedicated places. But as seen in the fib_hell/2
    example, that might have more overhead than a heartbeat; it depends on
    how costly a heartbeat itself is.
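
    As a sketch, such a dedicated yield point could look like this, with
    final/1 and step/2 as hypothetical placeholders for the actual finder:

    prove(State) :-
        final(State), !.
    prove(State) :-
        step(State, Next),
        sleep(0),            % dedicated cooperative yield point
        prove(Next).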

  • From Mostowski Collapse@21:1/5 to All on Thu Mar 9 09:44:26 2023
    Ok here is a first solution for the challenge:

    Try this little exercise:

    Engine 1 (Every Second):
    animates a clock in the GUI

    Engine 2 (Every Minute): calls async_process/4,
    fetches the current ₿ Bitcoin rate,
    and displays it

    tick(0) :- !.
    tick(N) :- M is N-1, write('tick '), flush_output,
        call_later(tick(M), 1000).

    tock(0) :- !.
    tock(N) :- M is N-1, write('tock '), flush_output,
        call_later(tock(M), 5000).

    It should display something along the lines of:

    ?- tick(11), tock(3), sleep(12000).
    tick tock tick tick tick tick tock tick tick tick tick tick tock tick true.

    Are there other solutions?
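
    One further possibility, as a sketch for SWI-Prolog: call_later/2 can
    be approximated with alarm/4 from library(time). This is signal-based
    rather than event-loop based, so it only approximates the behaviour
    above:

    :- use_module(library(time)).

    call_later(Goal, Millis) :-
        Secs is Millis/1000,
        alarm(Secs, Goal, _Id, [remove(true)]).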

  • From Mostowski Collapse@21:1/5 to Mostowski Collapse on Thu Mar 9 10:29:52 2023
    The same works in the Tau Prolog sandbox:

    :- use_module(library(os)).

    call_later(G, T) :-
        set_timeout(T, G, _).

    The predicate set_timeout/3 is also mentioned here:

    Web development with Tau Prolog
    José A. Riaza - June 13, 2022 https://biblioteca.sistedes.es/submissions/descargas/2022/PROLE/2022-PROLE-006.pdf

  • From Mild Shock@21:1/5 to All on Sat Aug 19 09:45:58 2023
    Yeah, Corona is over, the 90's are back:

    Example 1: Techno Music is Back
    (Remember Party Animals - Have You Ever Been Mellow)
    Domiziana feat. Blümchen - SOS
    https://www.youtube.com/watch?v=mlzA0R9kSTg

    Example 2: No-GIL Virtual Machines are Back
    (Remember Java's synchronized Vector and Hashtable, 1995)
    This PEP proposes using per-object locks https://peps.python.org/pep-0703/#container-thread-safety

  • From Mild Shock@21:1/5 to Mild Shock on Sat Aug 19 09:53:20 2023
    But what about the new SWI-Prolog Janus Python interface?
    It even runs with Python 3.12, it seems:

    /* SWI-Prolog (threaded, 64 bits, version 9.1.14) */
    ?- assertz(file_search_path(path, '<your python runtime directory>')).
    true.
    ?- py_version.
    % Janus embeds Python 3.12.0b4
    true.

    But it doesn't provide a Prolog model on the Python side; this fails:

    ?- L = foo(bar), py_call(str(L), S).
    ERROR: Domain error: `py_data' expected, found `foo(bar)'

    So people from PySwip will not immediately flock to Janus?
    That's a large list of people using PySwip; it says 329 people!

    https://github.com/yuce/pyswip

    PySwip seems to provide a Prolog model on the Python side; it has
    classes such as Functor, Variable, etc. But it's pretty dead, the
    last commit was Jan 18, 2023.

    Maybe there is a follow-up project somewhere? Or are they holding
    back? It says 0.2.11 (Not Released).

  • From Mild Shock@21:1/5 to Mild Shock on Wed Aug 23 00:55:57 2023
    Whoa! Now we've got:

    🚨 This is not a drill! 🚨
    @Microsoft is bringing Python to Excel https://twitter.com/TechCrunch/status/1694037454020558904

    Well, I guess via Dogelog Player we now also have Prolog in Excel.
    Will this violate this curious patent by XSB Prolog, which will expire
    on 2029-03-09:

    A user programmable deductive spreadsheet is implemented
    as an add-in to an existing mathematical spreadsheet program
    and allows the use of a logic programming language such as
    Prolog via a familiar spreadsheet interface.
    (expires 2029-03-09) https://patents.google.com/patent/US7761782B1/en

    LoL
