• NVIDIA Jetson Orin controlled by Prolog

    From Mild Shock@21:1/5 to All on Fri Jan 3 22:20:10 2025
    Hi,

    Ok this one is only 250 bucks for a TPU:

    Introducing NVIDIA Jetson Orin™ Nano Super
    https://www.youtube.com/watch?v=S9L2WGf1KrM

    Now I am planning to do the following:

    Create a tensor flow Domain Specific Language (DSL).

    With these use cases:

    - Run the tensor flow DSL locally in
    your Prolog system interpreted.

    - Run the tensor flow DSL locally in
    your Prolog system compiled.

    - Run the tensor flow DSL locally on
    your Tensor Processing Unit (TPU).

    - Run the tensor flow DSL remotely
    on a compute server.

    - What else?

    Maybe also support some ONNX file format?
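
    To make the "interpreted" use case concrete, here is a minimal
    sketch in Python (standing in for the Prolog host) of what such a
    tensor DSL could look like: expressions are nested terms, and an
    evaluator walks them. The operator set ('const', 'add', 'mul') is
    entirely hypothetical, just an illustration of the shape of the DSL.

```python
import numpy as np

# Hypothetical tensor DSL: expressions are nested tuples,
# e.g. ('add', A, B), ('mul', A, B) for matrix product,
# ('const', data) for a literal tensor.
def eval_expr(expr):
    op = expr[0]
    if op == 'const':
        return np.asarray(expr[1])
    if op == 'add':
        return eval_expr(expr[1]) + eval_expr(expr[2])
    if op == 'mul':
        return eval_expr(expr[1]) @ eval_expr(expr[2])
    raise ValueError(f'unknown op: {op}')

a = ('const', [[1.0, 0.0], [0.0, 1.0]])   # identity matrix
b = ('const', [[1.0, 2.0], [3.0, 4.0]])
print(eval_expr(('mul', a, b)))            # identity @ b == b
```

    The "compiled" use case would translate the same terms into host
    code instead of walking them; the TPU and remote cases would
    translate them into whatever the backend accepts.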

    Bye

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mild Shock@21:1/5 to Mild Shock on Fri Jan 3 22:31:10 2025
    Hi,

    Maybe one can get a better grip on this intimate
    relationship simply by some hands-on work?

    Linear Algebraic Approaches to Logic Programming

    Katsumi Inoue (National Institute of Informatics, Japan)

    Abstract: Integration of symbolic reasoning and machine
    learning is important for robust AI. Realization of
    symbolic reasoning based on algebraic methods is promising
    to bridge between symbolic reasoning and machine learning,
    since algebraic data structures have been used in machine
    learning. To this end, Sakama, Inoue and Sato have defined
    notable relations between logic programming and linear
    algebra and have proposed algorithms to compute logic
    programs numerically using tensors. This work has been
    extended in various ways, to compute supported and stable
    models of normal logic programs, to enhance the efficiency
    of computation using sparse methods, and to enable abduction
    for abductive logic programming. A common principle in
    this approach is to formulate logical formulas as vectors/
    matrices/tensors, and linear algebraic operations are
    applied on these elements for computation of logic programming.
    Partial evaluation can be realized in parallel and by
    self-multiplication, showing the potential for exponential
    speedup. Furthermore, the idea to represent logic programs
    as tensors and matrices and to transform logical reasoning
    to numeric computation can be the basis of the differentiable
    methods for learning logic programs.

    https://www.iclp24.utdallas.edu/invited-speakers/
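
    The core idea of the abstract can be sketched in a few lines of
    NumPy. This is a deliberately simplified toy, not the papers' exact
    encoding: it assumes a definite program whose rule bodies are single
    atoms, so the immediate consequence operator T_P becomes a
    thresholded matrix-vector product.

```python
import numpy as np

# Toy program:  p :- q.   q :- r.   r.
# 'true' is an always-true atom used to encode the fact r as r :- true.
atoms = ['p', 'q', 'r', 'true']
M = np.array([
    [0, 1, 0, 0],   # p :- q
    [0, 0, 1, 0],   # q :- r
    [0, 0, 0, 1],   # r :- true
    [0, 0, 0, 1],   # true :- true
], dtype=float)

v = np.array([0.0, 0.0, 0.0, 1.0])   # only 'true' holds initially
for _ in range(len(atoms)):           # iterate T_P up to a fixpoint
    v = (M @ v >= 1).astype(float)
print(dict(zip(atoms, v)))            # least model: every atom true

# Partial evaluation by self-multiplication: M @ M composes two T_P
# steps at once, so repeated squaring reaches the fixpoint in
# logarithmically many matrix products -- the "exponential speedup"
# mentioned in the abstract.
v0 = np.array([0.0, 0.0, 0.0, 1.0])
w = (np.linalg.matrix_power(M, 4) @ v0 >= 1).astype(float)
print(dict(zip(atoms, w)))            # same least model in one shot
```

    Supported/stable models, sparse methods, and abduction all build on
    this same vectors-and-matrices view of a program.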

    Bye

  • From Mild Shock@21:1/5 to Mild Shock on Sun Jan 5 20:08:41 2025
    John Sowa shows clear signs of coping problems. We just
    have an instance of “The Emperor’s New Clothes”: some
    companies have become naked with the advent of GPT.

    I don’t think it is productive to postulate
    some CANNOTs, like here:

    Linguists say that LLMs cannot be a language model.
    - Tensors do not make the linguistic information explicit.
    - They do not distinguish the syntax, semantics, and ontology.
    - GPT cannot use the 60+ years of AI research and development.
    https://www.youtube.com/watch?v=6K6F_zsQ264

    Then in the next slide he embraces tensors for
    his new Prolog system nevertheless. WTF! Basically
    this is a very narrow narrative, which is totally
    unfounded in my opinion. Just check out these papers:

    GRIN: GRadient-INformed MoE [arXiv:2409.12136]
    https://arxiv.org/abs/2409.12136

    A Survey on Mixture of Experts [arXiv:2407.06204]
    https://arxiv.org/abs/2407.06204

    This paints a totally different picture of LLMs; it seems
    they are more in the tradition of CYC by Douglas Lenat.
