Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, 10 July 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Terminating intuitionistic calculus
Giulio Fellin and Sara Negri
https://philpapers.org/rec/FELATI
It could be a wake-up call, with this many participants
already on the committee, that the whole logic
world was asleep for many years:
Non-Classical Logics. Theory and Applications XI,
5-8 September 2024, Lodz (Poland)
https://easychair.org/cfp/NCL24
Why is Minimal Logic at the core of many things?
Because it is the logic of the Curry-Howard isomorphism
for simple types:
---------------- (Ax)
Γ ∪ { A } ⊢ A

Γ ∪ { A } ⊢ B
---------------- (→I)
Γ ⊢ A → B

Γ ⊢ A → B    Δ ⊢ A
---------------------------- (→E)
Γ ∪ Δ ⊢ B
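Read through Curry-Howard, these three rules are just the typing rules
of the simply typed lambda calculus. As a minimal sketch (my own
illustration, not taken from any of the cited papers), here is that
reading as a small Prolog type checker, with terms var(X), lam(X,Body),
app(M,N) and a context given as a list of X-Type pairs:

% typeof(Ctx, Term, Type): Term has Type in context Ctx.
% member/2 is from library(lists).
typeof(Ctx, var(X), A) :-                 % (Ax)
    member(X-A, Ctx).
typeof(Ctx, lam(X, M), (A -> B)) :-       % (→I)
    typeof([X-A|Ctx], M, B).
typeof(Ctx, app(M, N), B) :-              % (→E)
    typeof(Ctx, M, (A -> B)),
    typeof(Ctx, N, A).

% Example: the K combinator inhabits the minimal-logic theorem A → (B → A):
% ?- typeof([], lam(x, lam(y, var(x))), T).
% T = (_A -> _B -> _A).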
And funny things can happen, especially when people
hallucinate duality or assume symmetry is given, for
example in newer inventions such as the λμ-calculus,
where ~~p => p is nevertheless not provable,
because an inference rule was forgotten. LoL
Recommended reading so far:
Propositional Logics Related to Heyting’s and Johansson’s
February 2008 - Krister Segerberg https://www.researchgate.net/publication/228036664
The Logic of Church and Curry
Jonathan P. Seldin - 2009 https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C
Meanwhile I am going back to tinkering with my
Prolog system, which even provides a more primitive
logic than minimal logic: pure Prolog is minimal
logic without embedded implication.
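For illustration only, here is a hedged sketch (my own, not code from
that Prolog system) of what embedded implication on top of pure Prolog
could look like: a tiny meta-interpreter that carries hypotheses in a
list, so a goal (Hyp => Goal) is proved by assuming Hyp while proving Goal.

:- op(1050, xfy, =>).   % assumed; some Prolog systems predefine =>

% solve(Hyps, Goal): prove Goal from the program clauses plus hypotheses Hyps.
solve(_, true).
solve(Hyps, (A, B)) :- solve(Hyps, A), solve(Hyps, B).
solve(Hyps, (H => G)) :- solve([H|Hyps], G).            % embedded implication
solve(Hyps, G) :- member(G, Hyps).                      % use a hypothesis
solve(Hyps, G) :- clause(G, Body), solve(Hyps, Body).   % ordinary resolution
                                                        % (program predicates
                                                        % assumed dynamic here)

% Example: ?- solve([], (p => p)). succeeds, although p/0 has no clauses.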
Hi,
A few years ago I was impressed by
the output of either Negri or Plato,
or the two together.
Now they are just an annoyance; all
they show is that they are neither talented
nor sufficiently trained.
Just have a look at:
Terminating intuitionistic calculus
Giulio Fellin and Sara Negri
https://philpapers.org/rec/FELATI
Besides the all too obvious creative idea and motive
behind it, it is most likely completely useless
nonsense. Already the presentation in the
paper shows utter incompetence:
Γ, A → B ⊢ A    Γ, A → B, B ⊢ Δ
--------------------------------
Γ, A → B ⊢ Δ
Everybody in the business knows that the
looping resulting from the A → B copying
is a fact. But it can be reduced, since the
copy in the right premiss is not needed.
Γ, A → B ⊢ A    Γ, B ⊢ Δ
--------------------------
Γ, A → B ⊢ Δ
The above variant is enough, just as Dragalin
presented the calculus. I really wish people
would completely understand these masterpieces
before they even touch multi-consequent calculi:
Mathematical Intuitionism: Introduction to Proof Theory
Albert Grigorevich Dragalin - 1988
https://www.amazon.com/dp/0821845209
Contraction-Free Sequent Calculi for Intuitionistic Logic
Roy Dyckhoff - 1992
http://www.cs.cmu.edu/~fp//courses/atp/cmuonly/D92.pdf
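To make the difference concrete, here is a hedged sketch (my own, not
code from Dragalin or Dyckhoff) of a naive single-succedent prover for
the implicational fragment that uses the Dragalin-style L→ rule quoted
above: the left premiss keeps the A → B copy, the right premiss drops
it. A depth bound stands in for a real termination argument; without
one, the copy kept in the left premiss can still loop.

% prove(Gamma, C, D): sequent Gamma |- C, depth bound D on L->.
% member/2 and select/3 are from library(lists).
prove(Gamma, C, _) :-                  % axiom
    member(C, Gamma).
prove(Gamma, (A -> B), D) :-           % R->
    prove([A|Gamma], B, D).
prove(Gamma, C, D) :-                  % L->, Dragalin style
    D > 0, D1 is D - 1,
    select((A -> B), Gamma, Rest),
    prove([(A -> B)|Rest], A, D1),     % left premiss keeps A -> B
    prove([B|Rest], C, D1).            % right premiss has B instead

% ?- prove([], (a -> (b -> a)), 3).        succeeds (a minimal-logic theorem)
% ?- prove([], (((p -> q) -> p) -> p), 5). fails, as Peirce's Law should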
What's the deeper semantic (sic!) explanation of the
two calculi GHPC and GCPC? I have a Kripke semantics
explanation in my notes, but didn't release it yet.
Have Fun!
Because of the vagueness of the notions of “constructive
proof”, “constructive operation”, the BHK-interpretation
has never become a versatile technical tool in the way
classical semantics has. Perhaps it is correct to say
that by most people the BHK-interpretation has never been
seen as an intuitionistic counterpart to classical semantics. https://festschriften.illc.uva.nl/j50/contribs/troelstra/troelstra.pdf
The meteoric rise of Curry-Howard isomorphism
and minimal logic, possibly because proof assistants
such as Lean, Agda, etc… all use it, is quite ironic,
in the light of this statement:
Because of the vagueness of the notions of “constructive
proof”, “constructive operation”, the BHK-interpretation
has never become a versatile technical tool in the way
classical semantics has. Perhaps it is correct to say
that by most people the BHK-interpretation has never been
seen as an intuitionistic counterpart to classical semantics. https://festschriften.illc.uva.nl/j50/contribs/troelstra/troelstra.pdf
Hi,
There are possibly issues of interdisciplinary
work. For example, Sorensen & Urzyczyn, in their
Lectures on the Curry-Howard Isomorphism, say that
the logic LP has no name in the literature.
On the other hand, Segerberg's paper shows that
the logic LP, in his labeling JP, which stems from
accepting Peirce's Law, is equivalent to a logic
accepting Curry's refutation rule,
i.e. the logic JE with:
Γ, A => B |- A
-----------------
Γ |- A
But the logic JE also implies that LEM was added!
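As a hedged aside (my own sketch, not from Segerberg), the JE rule above
can be bolted onto the prove/3 sketch from further up in the thread as
one extra clause, turning it into a JE-style prover rather than a
minimal-logic one; backward proof search then picks the B existentially
via unification, and Peirce's Law becomes provable:

% Curry's refutation rule: from Gamma, A => B |- A conclude Gamma |- A.
prove(Gamma, A, D) :-
    D > 0, D1 is D - 1,
    prove([(A -> _B)|Gamma], A, D1).

% ?- prove([], (((p -> q) -> p) -> p), 5). now succeeds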
Bye
Plato (p. 83 of Elements of Logical
Reasoning) … excellent book
Hi,
I am not hallucinating that Negri is nonsense:
This calculus does not terminate (e.g. on Peirce’s
formula). Negri [42] shows how to add a loop-checking
mechanism to ensure termination. The effect on complexity
isn’t yet clear; but the loop-checking is expensive.
Intuitionistic Decision Procedures since Gentzen
The Jägerfest - 2013
https://apt13.unibe.ch/slides/Dyckhoff.pdf
Bye
Sequent calculus offers a good possibility for
exhaustive proof search in propositional logic:
We can check through all the possibilities for
making a derivation. If none of them worked,
i.e., if each had at least one branch in which
no rule applied and no initial sequent was reached,
the given sequent is underivable. The
symbol |/- is used for underivability.
The premisses are simpler than the conclusion
in all the rules except possibly in the left
premiss of rule L=>. That is the only source
of non-termination. Rules other than L=> can
produce duplication, if an active formula had
another occurrence in the antecedent. This
source of duplication comes to an end.
The sad news is, the book is only
worth some firewood.
Plato (p. 83 of Elements of Logical Reasoning)
Interestingly the book uses non-classical
logic, since it says:
Sequent calculus offers a good possibility for
exhaustive proof search in propositional logic:
We can check through all the possibilities for
making a derivation. If none of them worked,
i.e., if each had at least one branch in which
no rule applied and no initial sequent was reached,
the given sequent is underivable. The
symbol |/- is used for underivability.
And then it has unprovable:
c. |/- A v ~A
d. |/- ~~A => A
But most likely the book has a blind spot, some
serious errors, or totally unfounded claims, since
for example with such a calculus the unprovability
of Peirce's Law cannot be shown so easily.
Exhaustive proof search will usually not terminate.
There are some terminating calculi, like Dyckhoff's
LJT, but a naive calculus based on Gentzen's take
will not terminate.
The single-succedent sequent calculus of proof
search of Table 4.1 is a relatively recent invention:
Building on the work of Albert Dragalin (1978) on the
invertibility of logical rules in sequent calculi,
Anne Troelstra worked out the details of the proof
theory of this `contraction-free' calculus in the
book Basic Proof Theory (2000).
Propositional Dynamic Logic of Regular Programs
Fischer & Ladner - 1979 https://www.sciencedirect.com/science/article/pii/0022000079900461
The modal systems K, T, S4, S5 (cf. Ladner [16]) are
recognizable subsystems of propositional dynamic logic.
K allows only the modality A,
T allows only the modality A ∪ λ,
S4 allows only the modality A*,
S5 allows only the modality (A ∪ A-)*.
Rather read the original; von Plato
takes his wisdom from:
The single-succedent sequent calculus of proof
search of Table 4.1 is a relatively recent invention:
Building on the work of Albert Dragalin (1978) on the
invertibility of logical rules in sequent calculi,
Anne Troelstra worked out the details of the proof
theory of this `contraction-free' calculus in the
book Basic Proof Theory (2000).
But the book by Troelstra (1939-2019) and
Schwichtenberg (1949-) doesn't contain a "minimal
logic is decidable" theorem based on some "loop
checking", as indicated by von Plato on page 78.
The problem situation is similar to Prolog SLD
resolution, where S stands for selection function.
Since the (L=>) inference rule is not invertible, it
involves a selection function σ
that picks the active formula:
Γ, A => B |- A    Γ, B |- C
---------------------------- (L=>)
Γ, A => B |- C

(the selection function σ picked A => B from the left-hand side)
One selection function might loop, another
selection function might not loop. In Jens Otten's
ileansep.p, all selections are tried through
backtracking over the predicate select/3 and
iterative deepening. To show unprovability you have
to show looping for all possible selection functions,
which is obviously less trivial than the "root-first
proof search" humbug from von Plato's vegan products
store that offers "naturally growing trees".
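As an illustration of that strategy, here is a hedged sketch (my own,
not Otten's actual ileansep.p) of iterative deepening on top of the
depth-bounded prove/3 sketch given earlier in the thread; on
backtracking, select/3 inside prove/3 tries every possible choice of
the active A => B, i.e. every selection function:

% iprove(Gamma, C): iterative deepening over the depth bound.
iprove(Gamma, C) :-
    between(1, 20, D),     % assumed ad-hoc limit, just for the sketch
    prove(Gamma, C, D),
    !.

% ?- iprove([], ((a -> b) -> ((b -> c) -> (a -> c)))).   succeeds

Of course failure within such a bound is not yet a proof of
unprovability, which is exactly the point made above.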
It is interesting to note that almost all the major subfields of AI
mirror subfields of philosophy: The AI analogue of philosophy of
linguistics; what philosophers call “practical
reasoning” is called “planning and acting” in
AI; ontology (indeed, much of metaphysics
and epistemology) corresponds to knowledge
representation in AI; and automated reasoning
is one of the AI analogues of logic.
– C.2.1.1 Intentions, practitions, and the ought-to-do.
Should AI workers study philosophy? Yes,
unless they are content to reinvent the wheel
every few days. When AI reinvents a wheel, it is
typically square, or at best hexagonal, and
can only make a few hundred revolutions before
it stops. Philosopher’s wheels, on the other hand,
are perfect circles, require in principle no
lubrication, and can go in at least two directions
at once. Clearly a meeting of minds is in order.
– C.4 Summary
Hi,
Yes, maybe we are just before a kind
of 2nd Cognitive Turn. The first Cognitive
Turn is characterized as:
The cognitive revolution was an intellectual
movement that began in the 1950s as an
interdisciplinary study of the mind and its
processes, from which emerged a new
field known as cognitive science.
https://en.wikipedia.org/wiki/Cognitive_revolution
The current mainstream belief is that
chat bots and the progress in AI are mainly
based on "Machine Learning", whereas
most of the progress is rather based on
"Deep Learning". But I am also sceptical
about "Deep Learning"; in the end a frequentist
is again lurking. In the worst case the
no-Bayesian-brain shock will come with a
technological singularity, in that the current
short inferencing of LLMs is enhanced by
some long inferencing, like here:
A week ago, I posted that I was cooking a
logical reasoning benchmark as a side project.
Now it's finally ready! Introducing 🦓 𝙕𝙚𝙗𝙧𝙖𝙇𝙤𝙜𝙞𝙘,
designed for evaluating LLMs with Logic Puzzles. https://x.com/billyuchenlin/status/1814254565128335705
making it possible for LLMs not just to excel
in such puzzles, but to advance to more
elaborate scientific models that can somehow
overcome fallacies such as:
- Kochen Specker Paradox, some fallacies
caused by averaging?
- Gluts and Gaps in Bayesian Reasoning,
some fallacies by consistency assumptions?
- What else?
So on quiet paws AI might become the new overlord
of science which we will happily depend on.
Jeff Barnett wrote:
You are surprised; I am saddened. Not only have we lost contact with the primary studies of knowledge
and reasoning, we have also lost contact with the
studies of methods and motivation. Psychology
was the basic home room of Alan Newell and many
other AI all stars. What is now called AI, I think
incorrectly, is just ways of exercising large amounts
of very cheap computer power to calculate approximates
to correlations and other statistical approximations.
The problem with all of this in my mind, is that we
learn nothing about the capturing of knowledge, what
it is, or how it is used. Both logic and heuristic reasoning
are needed and we certainly believe that intelligence is
not measured by its ability to discover "truth" or its
infallibly consistent results. Newton's thought process
was pure genius but known to produce fallacious results
when you know what Einstein knew at a later time.
I remember reading Ted Shortliffe's dissertation about
MYCIN (an early AI medical consultant for diagnosing
blood-borne infectious diseases) where I learned about
one use of the term "staff disease", or just "staff" for short.
In patient care areas there always seems to be an
in-house infection that changes over time. It changes
because sick patients brought into the area contribute
whatever is making them sick in the first place. In the
second place there is rapid mutations driven by all sorts
of factors present in hospital-like environments. The
result is that the local staff is varying, literally, minute
by minute. In a days time, the samples you took are
no longer valid, i.e., their day old cultures may be
meaningless. The underlying mathematical problem is
that probability theory doesn't really have the tools to
make predictions when the basic probabilities are
changing faster than observations can be
turned into inferences.
Why do I mention the problems of unstable probabilities
here? Because new AI uses fancy ideas of correlation
to simulate probabilistic inference, e.g., Bayesian inference.
Since actual probabilities may not exist in any meaningful
ways, the simulations are often based on air.
A hallmark of excellent human reasoning is the ability to
explain how we arrived at our conclusions. We are also
able to repair our inner models when we are in error if
we can understand why. The abilities to explain and
repair are fundamental to excellence of thought processes.
By the way, I'm not claiming that all humans or I have these
reflective abilities. Those who do are few and far between.
However, any AI that doesn't have some of these
capabilities isn't very interesting.
For more on reasons why logic and truth are only part of human
ability to reasonably reason, see
https://www.yahoo.com/news/opinion-want-convince-conspiracy-theory-100258277.html
-- Jeff Barnett
Well we all know about this rule:
- Never ask a woman about her weight
- Never ask a woman about her age
There is a similar rule for philosophers:
- Never ask a philosopher what is cognitive science
- Never ask a philosopher what is formula-as-types
Explanation: They like to be the champions of
pure form like in this paper below, so they
don’t like other disciplines dealing with pure
form or even having pure form on the computer.
"Pure” logic, ontology, and phenomenology
David Woodruff Smith - Revue internationale de philosophie 2003/2
https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm
Mild Shock wrote:
There are more and more papers of this sort:
Reliable Reasoning Beyond Natural Language
To address this, we propose a neurosymbolic
approach that prompts LLMs to extract and encode
all relevant information from a problem statement as
logical code statements, and then use a logic programming
language (Prolog) to conduct the iterative computations of
explicit deductive reasoning.
[2407.11373] Reliable Reasoning Beyond Natural Language
The future of Prolog is bright?
Hi,
Let's say one milestone in cognitive science
is the concept of "bounded rationality".
It seems LLMs have some traits that are also
found in humans. For example the anchoring effect
is a psychological phenomenon in which an
individual's judgements or decisions
are influenced by a reference point or "anchor",
which can be completely irrelevant. Like for example
when discussing the Curry-Howard isomorphism with
a real-world philosopher, one that might
not know the Curry-Howard isomorphism but might
nevertheless be tempted to hallucinate some nonsense.
https://en.wikipedia.org/wiki/Anchoring_effect
One highly cited paper in this respect is Tversky &
Kahneman 1974. R.I.P. Daniel Kahneman,
March 27, 2024. The paper is still cited today:
Artificial Intelligence and Cognitive Biases: A Viewpoint https://www.cairn.info/revue-journal-of-innovation-economics-2024-2-page-223.htm
Maybe using deeper and/or more careful reasoning,
possibly backed by a Prolog engine, could have
a positive effect? It is very difficult also for a
Prolog engine, since there is a trade-off
between producing no answer at all if the software
agent is too careful, and producing a wealth
of nonsense otherwise.
Bye
Now I wonder whether LLMs should be an
inch more informed by results from Neuro-
endocrinology research. I remember Marvin
Minsky publishing his ‘The Society of Mind’:
Introduction to ‘The Society of Mind’ https://www.youtube.com/watch?v=-pb3z2w9gDg
But this made me think about multi-agent
systems. Now with LLMs, what about a new
connectionist and deep learning approach,
plus Prolog for the prefrontal cortex (PFC)?
But who can write a blueprint? Now there
is this amazing guy called Robert M. Sapolsky,
who recently published Determined: A Science
of Life without Free Will, and who
calls consciousness just a hiccup. His turtles-
all-the-way-down model is a tour de force
towards an unsettling conclusion: we may not
grasp the precise marriage of nature and nurture
that creates the physics and chemistry at the
base of human behavior, but that doesn't mean it
doesn't exist. But the prefrontal cortex (PFC)
seems to be still quite brittle, not extremely
performant, and quite energy hungry.
So Prolog might excel?
Determined: A Science of Life Without Free Will https://www.amazon.de/dp/0525560998
The carbon emissions of writing and illustrating
are lower for AI than for humans https://www.nature.com/articles/s41598-024-54271-x
Perplexity CEO Aravind Srinivas says that the cost per
query in AI models has decreased by 100x in the past
2 years and quality will improve as hallucinations
decrease 10x per year
https://twitter.com/tsarnick/status/1830045611036721254
Disclaimer: Can't verify the latter claim... need to find a paper.
Hold your breath, the bartender at your next
vacation destination will most likely be an AI
robot. Let's say in 5 years from now. Right?
Michael Sheen The Robot Bartender
https://www.youtube.com/watch?v=tV4Fxy5IyBM
What bullshit:
Another concern is the potential for AI to displace
jobs and exacerbate economic inequality. A recent
study by McKinsey estimates that up to 800 million
jobs could be automated by 2030. While Murati believes
that AI will ultimately create more jobs than it
displaces, she acknowledges the need for policies to
support workers through the transition, such as job
retraining programs and strengthened social safety nets. https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/
Let's say there is a wine valley. All workers
are replaced by AI robots. Where do they go?
In some cultures you don't find people over
30 that are lifelong learners. What should they
learn? In another valley where they harvest
oranges, they also replaced everybody with AI
robots. And so on in the next valley, and the
next valley. We need NGOs and a Greta Thunberg
for AI ethics, not a nice face from OpenAI.
Hi,
SAN FRANCISCO/NEW YORK, Sept 4 - Safe
Superintelligence (SSI), newly co-founded by OpenAI's
former chief scientist Ilya Sutskever, has raised $1
billion in cash to help develop safe artificial
intelligence systems that far surpass human
capabilities, company executives told Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/
Now they are dancing https://twitter.com/AIForHumansShow/status/1831465601782706352
Bye
It's amazing how we are in the midst of new buzzwords
such as superintelligence, superhuman, etc… I used
the term "long inferencing" in one post somewhere
for a combination of an LLM with a more capable inferencing,
compared to current LLMs that rather show "short inferencing".
Then just yesterday it was Strawberry and Orion, as the
next leap by OpenAI. Is the leap getting out of control?
OpenAI wanted to do "Superalignment" but lost a figurehead.
Now there is a new company which wants to do safety-focused
non-narrow AI. But they chose another name. If I translate
superhuman to German I might end up with "Übermensch",
first used by Nietzsche and later by Hitler and the
Nazi regime. How ironic!
Nick Bostrom - Superintelligence https://www.orellfuessli.ch/shop/home/artikeldetails/A1037878459
Hi,
Not sure whether this cinematic masterpiece
contains a rendition of when I was hunted recently
by a virus and had some hypomanic episodes.
But the chapter "Electromagnetic Waves" is fun:
Three Thousand Years of Longing https://youtu.be/id8-z5vANvc?si=h3mvNLs11UuY8HnD&t=3881
Bye
Trump: They're eating the dogs, the cats https://www.youtube.com/watch?v=5llMaZ80ErY
https://twitter.com/search?q=trump+cat
You know the USA has a problem
when Oracle enters the race:
To source the 131,072 GPU AI "supercluster,"
Larry Ellison appealed directly to Jensen Huang,
during a dinner joined by Elon Musk at Nobu.
"I would describe the dinner as me and Elon
begging Jensen for GPUs. Please take our money.
We need you to take more of our money. Please!" https://twitter.com/benitoz/status/1834741314740756621
Meanwhile a contender in video GenAI:
FLUX.1 from Germany, hurray! And open source:
OK. Now I'm Scared... AI Better Than Reality https://www.youtube.com/watch?v=cvMAVWDD-DU
Hi,
The blue are AfD, the green are:
German greens after losing badly https://www.dw.com/en/german-greens-suffer-major-loss-of-votes-in-eu-elections-nina-haase-reports/video-69316755
Time to start a yellow party, the first party
with an Artificial Intelligence Ethics agenda?
Bye
P.S.: Here I tried some pig wrestling with
ChatGPT, demonstrating that Mira Murati is just
a nice face. But ChatGPT is just like a child,
spamming me with large bullet lists from
its huge lexical memory, without any deep
understanding. But it also gave me an interesting
list of potential AI critics of caliber. Any new
Greta Thunberg of Artificial Intelligence
Ethics among them?
Mira Murati Education Background https://chatgpt.com/c/fbc385d4-de8d-4f29-b925-30fac75072d4
I will probably never get a Turing Award or something
for what I did 23 years ago. Why is its reading
count on ResearchGate suddenly going up?
Knowledge, Planning and Language,
November 2001
I guess because of this: the same topic is tackled by
Microsoft's recent model GRIN. Shit. I really should
find some investor and pump up a startup!
"Mixture-of-Experts (MoE) models scale more
effectively than dense models due to sparse
computation through expert routing, selectively
activating only a small subset of expert modules." https://arxiv.org/pdf/2409.12136
But somehow I am happy with my dolce vita as
it is now... Or maybe I am deceiving myself?
P.S.: From the GRIN paper, here you see how
expert domain modules relate to each other:
Figure 6 (b): MoE Routing distribution similarity
across MMLU 57 tasks for the control recipe.
How it started:
How Hezbollah used pagers and couriers to counter
July 9, 2024 https://www.reuters.com/world/middle-east/pagers-drones-how-hezbollah-aims-counter-israels-high-tech-surveillance-2024-07-09/
How its going:
What we know about the Hezbollah pager explosions
Sept 17, 2024
https://www.bbc.com/news/articles/cz04m913m49o
Hi,
Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
was born around 1950. Here is what happened in that decade:
1) "Perceptron":
Rosenblatt's perceptrons were initially simulated on an
IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
Mark I Perceptron machine, the first implementation of
the perceptron algorithm. It was connected to a camera
with 20×20 cadmium sulfide photocells to
make a 400-pixel image.
https://de.wikipedia.org/wiki/Perzeptron
2) "Voder"
The Bell Telephone Laboratory's Voder (abbreviation of
Voice Operating Demonstrator) was the first attempt to
electronically synthesize human speech by breaking it down
into its acoustic components. The Voder was developed from
research into compression schemes for transmission of voice
on copper wires and for voice encryption. https://www.youtube.com/watch?v=TsdOej_nC1M
3) "Mini-Chess"
Los Alamos chess was the first chess-like game played by a
computer program. This program was written at Los Alamos
Scientific Laboratory by Paul Stein and Mark Wells for the
MANIAC I computer in 1956. The computer was primarily
constructed to perform calculations in support of hydrogen bomb
research at the Laboratory, but it could also play chess! https://www.youtube.com/watch?v=aAVT4rZbcGE
Bye
Not only does the speed not double every year anymore,
the density of transistors also doesn't double
every year anymore. See also:
‘Moore’s Law’s dead,’ Nvidia CEO https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618
So there is some hope in FPGAs. The article writes:
"In the latter paper, which includes a great overview of
the state of the art, Pilch and colleagues summarize
this as shifting the processing from time to space —
from using slow sequential CPU processing to hardware
complexity, using the FPGA’s configurable fabric
and inherent parallelism."
In reference to (no pay wall):
An FPGA-based real quantum computer emulator
15 December 2018 - Pilch et al. https://link.springer.com/article/10.1007/s10825-018-1287-5
Mild Shock wrote on Tuesday, 20 June 2023 at 17:20:27 UTC+2:
To hell with GPUs. Here come the FPGA qubits:
Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/
The superposition property enables a quantum computer
to be in multiple states at once.
https://www.techtarget.com/whatis/definition/qubit
Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?
Hi,
Next issue 2:1 scheduled for January 2025 https://www.iospress.com/catalog/journals/neurosymbolic-artificial-intelligence
What is Neuro-Symbolic AI? https://allegrograph.com/what-is-neuro-symbolic-ai/
Connectionist methods combined with symbolic
methods? BTW: not really something new, but
nevertheless, the current times might ask for
more interdisciplinary work.
The article by Ron Sun, Dual-process theories,
cognitive architectures, and hybrid neural-
symbolic models, even admits it:
"This idea immediately harkens back to the 1990s
when hybrid models first emerged [..] Besides
being termed neural-symbolic or neurosymbolic models,
they have also been variously known as connectionist
symbolic model, hybrid symbolic neural networks,
or simply hybrid models or systems.
I argued back then and am still arguing today [..]
. In particular, within the human mental architecture,
we need to take into account dual processes (e.g.,
as has been variously termed as implicit versus explicit,
unconscious versus conscious, intuition versus reason,
System 1 versus System 2, and so on, albeit sometimes
with somewhat different connotations). Incidentally,
dual-process (or two-system) theories have become quite
popular lately."
Ok, I will take a nap, and let my automatic
processing do the digesting of what he wrote.
LoL
That's a funny quote:
"Once you have a truly massive amount of information
integrated as knowledge, then the human-software
system will be superhuman, in the same sense that
mankind with writing is superhuman compared to
mankind before writing."
https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes
I told you so, not worth a dime:
I have something to share with you. After much reflection,
I have made the difficult decision to leave OpenAI. https://twitter.com/miramurati/status/1839025700009030027
Who is stepping in with the difficult task, Sam Altman himself?
The Intelligence Age
September 23, 2024
https://ia.samaltman.com/
Mild Shock schrieb:
Hi,
The blue are AfD, the green are:
German greens after losing badly
https://www.dw.com/en/german-greens-suffer-major-loss-of-votes-in-eu-elections-nina-haase-reports/video-69316755
Time to start a yellow party, the first party
with an Artificial Intelligence Ethics agenda?
Bye
P.S.: Here I tried some pig wrestling with
ChatGPT, demonstrating that Mira Murati is just
a nice face. But ChatGPT is just like a child,
spamming me with large bullet lists from
its huge lexical memory, without any deep
understanding. But it also gave me an interesting
list of potential AI critics of some caliber. Any new
Greta Thunberg of Artificial Intelligence
Ethics among them?
Mira Murati Education Background
https://chatgpt.com/c/fbc385d4-de8d-4f29-b925-30fac75072d4
Mild Shock schrieb:
What bullshit:
Another concern is the potential for AI to displace
jobs and exacerbate economic inequality. A recent
study by McKinsey estimates that up to 800 million
jobs could be automated by 2030. While Murati believes
that AI will ultimately create more jobs than it
displaces, she acknowledges the need for policies to
support workers through the transition, such as job
retraining programs and strengthened social safety nets.
https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/
Let's say there is a wine valley. All the workers
are replaced by AI robots. Where do they go?
In some cultures you don't find people over
30 who are lifelong learners. What should they
learn? In another valley, where they harvest
oranges, they have also replaced everybody with AI
robots. And so on in the next valley, and the
next valley. We need NGOs and a Greta Thunberg
for AI ethics, not a nice face from OpenAI.
ChatGPT is definitely unreliable.
Hi,
Let's say I have to choose between pig wrestling with a
grammar-nazi Stack Overflow user with 100k reputation, or
interacting with ChatGPT, which puts a lot of
effort into understanding the least cue I give and isn't
locked into English only: you can also use it in
German, Turkish, etc., whatever.
Whom do I use as a programming companion, Stack Overflow
or ChatGPT? I think ChatGPT is the clear winner;
it doesn't feature the abomination of a virtual
prison like Stack Overflow. Or as Cycorp, Inc. already
put it decades ago:
Common Sense Reasoning – From Cyc to Intelligent Assistant
Doug Lenat et al. - August 2006
2 The Case for an Ambient Research Assistant
2.3 Components of a Truly Intelligent Computational Assistant
Natural Language:
An assistant system must be able to remember
questions, statements, etc. from the user, and
what its own response was, in order to understand
the kinds of language ‘shortcuts’ people normally use
in context.
https://www.researchgate.net/publication/226813714
Bye
Mild Shock schrieb:
That's a funny quote:
"Once you have a truly massive amount of information
integrated as knowledge, then the human-software
system will be superhuman, in the same sense that
mankind with writing is superhuman compared to
mankind before writing."
https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes
The biggest flop in logic programming
history: Scryer Prolog is dead. The poor
thing is a Prolog system without garbage
collection, not very useful. So how will
Austria get out of all this?
With 50 PhDs and 10 Postdocs?
"To develop its foundations, BILAI employs a
Bilateral AI approach, effectively combining
sub-symbolic AI (neural networks and machine learning)
with symbolic AI (logic, knowledge representation,
and reasoning) in various ways."
https://www.bilateral-ai.net/jobs/.
LoL
Mild Shock schrieb:
You know the USA has a problem
when Oracle enters the race:
To source the 131,072-GPU AI "supercluster,"
Larry Ellison appealed directly to Jensen Huang
during a dinner joined by Elon Musk at Nobu.
"I would describe the dinner as me and Elon
begging Jensen for GPUs. Please take our money.
We need you to take more of our money. Please!”
https://twitter.com/benitoz/status/1834741314740756621
Meanwhile a contender in video GenAI,
FLUX.1 from Germany. Hurray! With open source:
OK. Now I'm Scared... AI Better Than Reality
https://www.youtube.com/watch?v=cvMAVWDD-DU
Mild Shock schrieb:
The carbon emissions of writing and illustrating
are lower for AI than for humans
https://www.nature.com/articles/s41598-024-54271-x
Perplexity CEO Aravind Srinivas says that the cost per
query in AI models has decreased by 100x in the past
2 years and quality will improve as hallucinations
decrease 10x per year
https://twitter.com/tsarnick/status/1830045611036721254
Disclaimer: Can't verify the latter claim... need to find a paper.
Mild Shock schrieb:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05
UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Hi,
Given that Scryer Prolog is dead, this
made me smile: traces of Scryer Prolog
are found in the FLOPS 2024 proceedings:
7th International Symposium, FLOPS 2024,
Kumamoto, Japan, May 15–17, 2024, Proceedings https://www.cs.ox.ac.uk/jeremy.gibbons/flops2024.pdf
So why did it flop? Missing garbage collection
in the Prolog system? Or is the estimate
that ChatGPT will also kill Scryer Prolog?
Or simply a problem of using Rust as the
underlying host language?
Bye
Hi,
Whoa! ChatGPT for the Flintstones: Bloomberg
Our long-term investment in AI is already
available for fixed income securities.
Try it for yourself! https://twitter.com/TheTerminal/status/1783473601632465352
Did she just say Terminal? LoL
Bye
P.S.: But the display of the extracted logical
query from the natural language phrase is quite
cute. Can ChatGPT do the same?
Mild Shock schrieb:
To hell with GPUs. Here come the FPGA qubits:
Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/
The superposition property enables a quantum computer
to be in multiple states at once.
https://www.techtarget.com/whatis/definition/qubit
Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?
Hi,
I wonder whether WASM will be fast some day.
I found this paper, which draws a rather dim picture,
but the paper is already a little bit old:
Understanding the Performance of WebAssembly Application
Weihang Wang et al. - 2021
https://weihang-wang.github.io/papers/imc21.pdf
The takeaway is more or less that JIT-compiled JavaScript has
the same speed as WASM, with WASM sometimes showing a more
favorable outcome on Firefox than on Chrome and Edge:
                        Chrome   Firefox     Edge
JavaScript
  D.1 Exec. Time (ms)    45.57     48.26    63.62
  M.2 Exec. Time (ms)   249.60    167.03   201.68
WASM
  D.1 Exec. Time (ms)    65.23     39.65    83.53
  M.2 Exec. Time (ms)   233.08    345.98   192.87
Bye
Hi,
Not only $TSLA is on fire sale! Prolog
systems have also capitulated long ago.
Scryer Prolog and Trealla Prolog copy
some old CLP(X) nonsense based on attributed
variables. SWI-Prolog isn't better off.
Basically the USA and their ICLP venue
are dumbing down all of Prolog development,
so that nonsense such as this gets published:
Automatic Differentiation in Prolog
Tom Schrijvers et al. - 2023
https://arxiv.org/pdf/2305.07878
It has the most stupid conclusion.
"In future work we plan to explore Prolog’s meta-
programming facilities (e.g., term expansion) to
implement partial evaluation of revad/5 calls on
known expressions. We also wish to develop further
applications on top of our AD approach, such as
Prolog-based neural networks and integration with
existing probabilistic logic programming languages."
As if term expansion would do anything good
concerning the evaluation or training of neural
networks. They are totally clueless!
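For the record, the purely symbolic side of this is ancient
Prolog folklore. A textbook-style derivative predicate is just a
handful of clauses; the sketch below is my own illustration over
+, - and * with a single variable, not the revad/5 machinery
from the paper:

% d(Expr, Var, Derivative): naive symbolic differentiation.
d(X, X, 1) :- !.
d(C, _, 0) :- number(C), !.
d(C, X, 0) :- atom(C), C \== X, !.
d(U+V, X, DU+DV) :- d(U, X, DU), d(V, X, DV).
d(U-V, X, DU-DV) :- d(U, X, DU), d(V, X, DV).
d(U*V, X, DU*V+U*DV) :- d(U, X, DU), d(V, X, DV).

% ?- d(x*x+3*x, x, D).
% D = 1*x+x*1+(0*x+3*1).

The unsimplified output is exactly why the next point matters.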
Bye
P.S.: The stupidity is even topped by the fact
that people have unlearned how to do symbolic algebra
in Prolog itself. They are not able to code it:
?- simplify(x+x+y-y,E).
E = number(2)*x+y-y
Simplification is hard (IMO).
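For comparison, a minimal native sketch in plain Prolog handles
both examples in this thread. It is my own illustration, nothing
like a complete simplifier: it assumes the expression is built
only from +, -, integers and atoms, collects integer coefficients
into a list, and reads the result back. It uses member/2 and
select/3 from library(lists) and ISO findall/3.

% simplify(+Expr, -Simplified): collect integer coefficients of
% the atoms in a +/- expression, then rebuild it. The constant
% term is stored under the key 1.
:- use_module(library(lists)).   % member/2, select/3

simplify(E, S) :-
    collect(E, 1, [], Pairs),
    rebuild(Pairs, S).

collect(A+B, Sign, Acc0, Acc) :- !,
    collect(A, Sign, Acc0, Acc1),
    collect(B, Sign, Acc1, Acc).
collect(A-B, Sign, Acc0, Acc) :- !,
    collect(A, Sign, Acc0, Acc1),
    NSign is -Sign,
    collect(B, NSign, Acc1, Acc).
collect(N, Sign, Acc0, Acc) :-
    number(N), !,
    D is Sign*N,
    add_coeff(1, D, Acc0, Acc).
collect(X, Sign, Acc0, Acc) :-
    atom(X),
    add_coeff(X, Sign, Acc0, Acc).

add_coeff(K, D, Acc0, [K-C|Rest]) :-
    (   select(K-C0, Acc0, Rest)
    ->  C is C0+D
    ;   Rest = Acc0, C = D
    ).

rebuild(Pairs, S) :-
    findall(T, (member(K-C, Pairs), C =\= 0, monom(K, C, T)), Ts),
    (   Ts = []
    ->  S = 0
    ;   join(Ts, S)
    ).

monom(1, C, C) :- !.
monom(K, 1, K) :- !.
monom(K, C, C*K).

join([T], T) :- !.
join([T|Ts], S+T) :- join(Ts, S).

% ?- simplify(x+x+y-y, S).
% S = 2*x.
% ?- simplify(x+y+1+x+y+ -1, S).
% S = 2*x+2*y.

Nothing clever, but it reproduces S = 2*x+2*y from the query
below without leaving Prolog.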
Instead they are now calling Python:
sym(A * B, S) :-                        % Prolog term -> SymPy object
    !, sym(A, A1),
    sym(B, B1),
    py_call(operator:mul(A1, B1), S).

mys(S, A * B) :-                        % SymPy object -> Prolog term
    py_call(sympy:'Mul', Mul),
    py_call(isinstance(S, Mul), @(true)),
    !, py_call(S:args, A0-B0),
    mys(A0, A),
    mys(B0, B).

Etc..

sympy(A, R) :-
    sym(A, S),
    mys(S, R).
?- sympy(x + y + 1 + x + y + -1, S).
S = 2*x+2*y ;
This is the final nail in the coffin, the declaration
of the complete decline of Prolog. SWI-Prolog Janus
is full proof that we have reached the valley of
idiocracy in Prolog, and that there
are no more capable Prologers around.