Just watched two videos from the Scala world, year 2023, and nearly choked on my yoghurt, especially when I saw that the first presenter got a headache from early returns. LoL

Async/Await for the Monadic Programmer
https://www.youtube.com/watch?v=OH5cxLNTTPo

DIRECT STYLE SCALA, Scalar Conference 2023
https://www.youtube.com/watch?v=0Fm0y4K4YO8

What's the bottom line? All because they discovered they could do async/await as well?
My recent Async/Await surrogates for JDK 21 provide coroutines with suspend/resume semantics. They are not continuations. Their aim is to provide async/await and not only setTimeout(). As a result you don't need to write libraries with continuation parameters, very unlike nonsense such as the JavaScript Express web framework.
stackfulness

In contrast to a stackless coroutine, a stackful coroutine can be suspended from within a nested stackframe. Execution resumes at exactly the same point in the code where it was suspended before.

stackless

With a stackless coroutine, only the top-level routine may be suspended. Any routine called by that top-level routine may not itself suspend. This prohibits providing suspend/resume operations in routines within a general-purpose library.

https://www.boost.org/doc/libs/1_57_0/libs/coroutine/doc/html/coroutine/intro.html#coroutine.intro.stackfulness
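To make the stackful property concrete, here is a minimal sketch using plain JDK 21 virtual threads (not the surrogate library itself, whose API may look different): the blocking call sits three frames deep, yet the whole virtual thread parks there and later resumes at exactly that point. With a stackless design, the inner routine could not suspend the computation on its own.

import java.util.concurrent.SynchronousQueue;

public class StackfulDemo {
    // rendezvous the coroutine blocks on, three frames deep in the call stack
    static final SynchronousQueue<String> mailbox = new SynchronousQueue<>();

    static String inner() throws InterruptedException {
        return mailbox.take();               // parks the virtual thread right here
    }
    static String middle() throws InterruptedException { return inner(); }
    static String outer() throws InterruptedException { return "got: " + middle(); }

    public static void main(String[] args) throws Exception {
        Thread vt = Thread.ofVirtual().start(() -> {
            try {
                System.out.println(outer()); // resumes exactly where it was parked
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        mailbox.put("hello");                // unparks the coroutine
        vt.join();
    }
}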
It's also a proof of concept that no stack copying is necessary. Well, that's not 100% true: the Prolog interpreter does a little bit of unwinding and rewinding during the '$YIELD'/1 instruction. But nowhere do we copy a native stack, unlike Martin Odersky's speculation that he might implement something with stack copying. Except that virtual threads might use copying when they resize their stack, I don't see any need for copying. Also, sometimes a callback can be piggybacked on an existing coroutine if it doesn't yield itself; I am already using this in Dogelog Player as an optimization.

The idea to use semaphores in my implementation can be credited to this paper from 1980, where semaphores are the main switchpoint:

Extension of Pascal and its Application to Quasi-Parallel Programming and Simulation, Software - Practice and Experience, 10 (1980), 773-789
J. Kriz and H. Sandmayr
https://www.academia.edu/47139332

But my experience with JDK 21 virtual threads is still limited; I am only beginning to explore them as a way to have a large number of coroutines.
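For illustration, here is a rough sketch of that semaphore switchpoint, with made-up class and method names rather than the actual implementation: two semaphores ping-pong control between the caller and a coroutine running on a JDK 21 virtual thread, so at any moment exactly one side is running.

import java.util.concurrent.Semaphore;

public class SemaphoreCoroutine {
    private final Semaphore callerTurn = new Semaphore(0);
    private final Semaphore coroutineTurn = new Semaphore(0);
    private final Thread body;

    public SemaphoreCoroutine(Runnable code) {
        body = Thread.ofVirtual().unstarted(() -> {
            code.run();
            callerTurn.release();            // final hand-back when the body ends
        });
    }

    // called from inside the coroutine body: hand control back to the caller
    public void suspend() throws InterruptedException {
        callerTurn.release();
        coroutineTurn.acquire();             // wait until the caller resumes us
    }

    // called by the caller: (re)start the coroutine, wait for its next suspend
    public void resume() throws InterruptedException {
        if (body.getState() == Thread.State.NEW) body.start();
        else coroutineTurn.release();
        callerTurn.acquire();                // wait until the coroutine yields or ends
    }

    public static void main(String[] args) throws InterruptedException {
        SemaphoreCoroutine[] c = new SemaphoreCoroutine[1];
        c[0] = new SemaphoreCoroutine(() -> {
            try {
                System.out.println("step 1");
                c[0].suspend();
                System.out.println("step 2");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        c[0].resume();                       // prints "step 1"
        System.out.println("caller runs in between");
        c[0].resume();                       // prints "step 2"
    }
}

Note that the suspend() call may of course sit in a nested stack frame of the body, which is exactly the stackful behaviour described above.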
Can the golang interface handling be compared to laziness/sharing in the non-strict semantics of Haskell? Just curious; maybe there is nevertheless a chance to speed up Prolog call/n. After all, Haskell and Prolog are very similar in that they champion non-strict features. Most novice Prolog programmers are surprised by this behaviour, and by the fact that they need to invoke is/2 to make evaluation explicit: why is it not implicit like in every other programming language? (Warning: the example could be misleading, it's not what I am attacking, only motivation here.)

?- X = 1+2.
X = 1+2

But then there are other, more important corners where Prolog and Haskell are basically the same, i.e. call/n. The sharing in Haskell could then give semantics to monomorphic and polymorphic caches. Did somebody write a paper about the golang interface handling, relating it to non-strict behaviour?
BTW: I retract my claim that Trealla Prolog
chokes on call/1. This is amazing. Try this program:
?- [user].
p(X) :- X = (Y is 1+2, _ is Y+3).
First SWI-Prolog:
/* SWI-Prolog */
?- p(X), time((between(1,1000000,_),call(X),fail; true)).
% 2,999,998 inferences, 0.516 CPU in 0.508 seconds (101% CPU, 5818178 Lips)
X = (_A is 1+2, _ is _A+3).
?- time((between(1,1000000,_),p(X),call(X),fail; true)).
% 3,999,998 inferences, 0.594 CPU in 0.580 seconds (102% CPU, 6736839 Lips)
Then Trealla Prolog:
?- p(X), time((between(1,1000000,_),X,fail; true)).
% Time elapsed 0.244s, 4000003 Inferences, 16.363 MLips
X = (_A is 1+2,_B is _A+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% Time elapsed 0.399s, 6000003 Inferences, 15.047 MLips
true.
What's going on, why is it faster?
Mild Shock wrote:

Motivation: it is now widely assumed that Prolog has call/n. But testing shows that both Scryer Prolog and Trealla Prolog choke even on call/1. But then we find this proposal, which includes maplist/n, foldl/4, etc.:

A Prologue for Prolog - post-N290
https://www.complang.tuwien.ac.at/ulrich/iso-prolog/prologue

But is there a Prolog technology that makes call/n fast in the first place?
I can make the first test case faster in SWI-Prolog by removing the call/1, but it doesn't have an impact on the second test case:

?- p(X), time((between(1,1000000,_),X,fail; true)).
% 2,999,998 inferences, 0.172 CPU in 0.183 seconds (94% CPU, 17454534 Lips)
X = (_A is 1+2, _ is _A+3).

?- time((between(1,1000000,_),p(X),X,fail; true)).
% 3,999,998 inferences, 0.594 CPU in 0.597 seconds (99% CPU, 6736839 Lips)
true.

Now I am comparing the same for SWI-Prolog and Trealla Prolog: SWI-Prolog is a tad faster in the first test case, but a tad slower in the second test case.
But I cannot retract my claim about Scryer Prolog, it definitely chokes:
?- p(X), time((between(1,1000000,_),X,fail; true)).
% CPU time: 2.233s, 11_000_023 inferences
X = (_A is 1+2,_B is _A+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% CPU time: 6.180s, 13_000_045 inferences
true.
That's an order of magnitude slower than Trealla and SWI-Prolog.
A small sanity test: how do formerly Jekejeke Prolog and Dogelog Player perform? Dogelog Player does not yet have call/N; I do not promote using it, since I am still waiting for a good idea to compile it a little more statically. Formerly Jekejeke Prolog uses a dynamic inline cache, polymorphic in the arity. But this test case is anyway less about call/N and more about call/1 and is/2. I get:
For formerly Jekejeke Prolog:
/* Jekejeke Prolog 1.6.6 */
?- p(X), time((between(1,1000000,_),X,fail; true)).
% Zeit 540 ms, GC 3 ms, Uhr 01.03.2024 19:37
X = (_0 is 1+2, _1 is _0+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% Zeit 867 ms, GC 3 ms, Uhr 01.03.2024 19:37
true.
For Dogelog Player:
/* Dogelog Player 1.1.6 */
?- p(X), time((between(1,1000000,_),X,fail; true)).
% Zeit 447 ms, GC 0 ms, Lips 15660199, Uhr 01.03.2024 19:38
X = (_16030782 is 1+2, _16030783 is _16030782+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% Zeit 1780 ms, GC 0 ms, Lips 14606802, Uhr 01.03.2024 19:38
true.
Jekejeke Prolog and Dogelog Player do not choke like Scryer Prolog does. The performance of Dogelog Player is amazing, since call/1 is implemented in pure Prolog, not via an interpreter but via a little compiler and an intermediate form, which we use anyway when we generate cross-compiled code or static/dynamic clauses.
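As a generic illustration of such a dynamic inline cache (not Jekejeke Prolog's actual data structures; the names are invented), a call/N site can remember the predicate it resolved last time, keyed by functor and the arity obtained after appending the extra arguments, and fall back to the full predicate table only on a miss:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class CallSiteCache {
    record Key(String functor, int arity) {}

    private final Map<Key, Runnable> cache = new HashMap<>();
    private final Function<Key, Runnable> predicateTable;    // the slow, full lookup

    CallSiteCache(Function<Key, Runnable> predicateTable) {
        this.predicateTable = predicateTable;
    }

    // call/N: closure with closureArity arguments plus extra appended arguments
    void call(String functor, int closureArity, int extra) {
        Key key = new Key(functor, closureArity + extra);    // polymorphic in the arity
        Runnable target = cache.get(key);                    // fast path: cache hit
        if (target == null) {
            target = predicateTable.apply(key);              // slow path: resolve once
            cache.put(key, target);
        }
        target.run();
    }

    public static void main(String[] args) {
        // toy predicate table: every predicate just prints its key
        CallSiteCache site = new CallSiteCache(k ->
            () -> System.out.println("calling " + k.functor() + "/" + k.arity()));
        site.call("foo", 2, 1);   // resolved via the table, then cached
        site.call("foo", 2, 1);   // served from the cache
    }
}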
Let's not forget GNU Prolog. It lags a little behind SWI-Prolog for the first test case:
?- p(X), (between(1,1000000,_),X,fail; true).
X = (A is 1+2,_ is A+3)
(453 ms) yes
?- (between(1,1000000,_),p(X),X,fail; true).
(578 ms) yes
Also, this kind of testing might be a little too generous, by omitting the time/1 call.