
Related Number Theory notebooks / discussions:

- «Numerically 2026 is unremarkable yet happy: semiprime with primitive roots» https://community.wolfram.com/groups/-/m/t/3594686

- «Happy √2²²-22 -- And other ways to calculate 2026» https://community.wolfram.com/groups/-/m/t/3599161


The integer 2026 is semiprime and a happy number, with 365 as one of its primitive roots. Although 2026 may not be particularly noteworthy in number theory, this provides a great excuse to create various elaborate visualizations that reveal some interesting aspects of the number.
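
For the curious, here is a minimal plain-Raku sketch (no packages) that checks all three claims; the happy-number chain is 2026 → 44 → 32 → 13 → 10 → 1:

  # A minimal plain-Raku sketch (no packages) checking the three claims.
  my $n = 2026;

  # Semiprime: exactly two prime factors, counted with multiplicity.
  sub prime-factors(Int $k is copy) {
      my @f;
      for 2..* -> $p {
          last if $p * $p > $k;
          while $k %% $p { @f.push($p); $k div= $p }
      }
      @f.push($k) if $k > 1;
      @f
  }
  say prime-factors($n);   # (2 1013), i.e. semiprime

  # Happy: iterating the sum of squared digits reaches 1.
  sub is-happy(Int $k is copy) {
      my %seen;
      $k = $k.comb.map({ $_ ** 2 }).sum while $k != 1 && !%seen{$k}++;
      $k == 1
  }
  say is-happy($n);        # True: 2026 → 44 → 32 → 13 → 10 → 1

  # Primitive root: the order of 365 mod 2026 must equal φ(2026) = 1012,
  # so 365^(1012/p) mod 2026 must differ from 1 for each prime p | 1012.
  my @checks = (2, 11, 23).map: { expmod(365, 1012 div $_, $n) };
  say so @checks.all != 1; # True means 365 is a primitive root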


Interesting variant. I might program it for some of the «Rock-Paper-Scissors extensions» here:

https://rakuforprediction.wordpress.com/2025/03/03/rock-pape...

Some of the extensions would need polyhedral dice:

https://demonstrations.wolfram.com/OpenDiceRolls/
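
As a teaser, here is a hypothetical plain-Raku sketch of the balanced n-move generalization (odd n), with moves picked by rolling an n-sided die; the actual extensions at the link have their own move names and rules:

  # Hypothetical balanced n-move RPS for odd n:
  # move a beats move b iff (a - b) mod n is in 1 .. (n-1)/2.
  sub beats(Int $a, Int $b, Int $n) {
      my $d = ($a - $b) % $n;
      1 <= $d <= ($n - 1) div 2
  }

  sub play(Int $n = 5) {
      my ($a, $b) = (^$n).roll, (^$n).roll;   # two rolls of an n-sided die
      say "P1: $a, P2: $b => ",
          beats($a, $b, $n) ?? 'P1 wins'
          !! beats($b, $a, $n) ?? 'P2 wins'
          !! 'draw';
  }

  play(7);   # a 7-move extension needs a 7-sided die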


This document (notebook) shows transformations of a movie dataset into a format more suitable for data analysis and for making a movie recommender system. It is the first of a three-part series of notebooks that showcase Raku packages for doing Data Science (DS).
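
For a flavor of the transformations, here is a hypothetical plain-Raku illustration (the records and column names are made up) of reshaping wide movie records into long-format (item, tag, value) triples, a shape convenient for building recommender matrices:

  # Hypothetical wide-format movie records:
  my @movies =
      { id => 'm1', title => 'Alien',  genres => <Horror SciFi>   },
      { id => 'm2', title => 'Amelie', genres => <Comedy Romance> };

  # Flatten into long-format (item, tag, value) triples:
  my @long = @movies.map(-> %m {
      slip %m<genres>.map({ %( item => %m<id>, tag => "genre:$_", value => 1 ) })
  });
  .say for @long;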


Yes, Wolfram Language (WL) -- aka Mathematica -- introduced `Tabular` in 2025. It is a new data structure with a constellation of related functions (like `ToTabular`, `PivotToColumns`, etc.). Using it is 10-100 times faster than using WL's older `Dataset` structure. (In my experience, with both didactic and real-life data of 1,000-100,000 rows and 10-100 columns.)


This blog post (and related notebook) shows how to use Large Language Model (LLM) function calling with the Raku package "LLM::Functions".

- Package: https://raku.land/zef:antononcube/LLM::Functions

- Notebook: https://github.com/antononcube/RakuForPrediction-blog/blob/m...
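
For context, here is a minimal usage sketch of "LLM::Functions" following the basic API in the package's README (the function-calling specifics are in the linked notebook; the prompt and configuration values below are made up):

  use LLM::Functions;

  # A parameterized LLM function; the block's $_ is filled in at call time.
  my &clarify = llm-function(
      { "Explain like I am five: $_" },
      e => llm-configuration('ChatGPT', temperature => 0.4)
  );

  say clarify('What is function calling in LLM APIs?');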


Mostly because Python is not a good "discovery" and prototyping language. It is like that by design -- Guido van Rossum decided that TMTOWTDI ("there's more than one way to do it") is counterproductive.

Another point, which I could have mentioned in my previous response -- Raku has a more elegant and easier-to-use framework for asynchronous computations.
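
For example, a minimal sketch of the built-in style (plain Raku, no imports):

  # Three tasks run concurrently on the thread pool; await gathers results.
  my @tasks = (1..3).map: -> $i {
      start { sleep $i / 10; "task $i done" }
  };
  .say for await @tasks;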

IMO, Python's introspection matches Raku's.

Some argue that Python's LLM packages are more numerous and better than Raku's. I agree on the "more" part. I am not sure about the "better" part:

- Generally speaking, different people prefer to decompose computations in different ways.

- When I re-implemented Raku's LLM packages in Python a few years ago, Python did not have equally convenient packages.


Ah, yes, Raku's "LLM::Graph" is heavily inspired by the design of the Wolfram Language (aka Mathematica) function LLMGraph.

WL's LLMGraph is more developed and productized, but Raku's "LLM::Graph" is catching up.

I would like to say that "LLM::Graph" was relatively easy to program because of Raku's introspection, wrappers, asynchronous features, and pre-existing LLM packages. As a consequence, the code of "LLM::Graph" is short.
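
A small illustration of the kind of introspection that helps: a function's parameter names can be read off its signature at run time, which lets a graph infer node dependencies from argument names (a plain-Raku sketch, not "LLM::Graph" internals):

  sub story($plot, $title) { "Combine $plot and $title." }

  # The parameter names (with sigils) are available at run time:
  say &story.signature.params.map(*.name);   # ($plot $title)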

Wolfram Language does not have that level of introspection, but otherwise it is likely a better choice, mostly for its far greater scope of functionalities (mathematics, graphics, computable data, etc.).

In principle, a corresponding Python "LLMGraph" package can be developed for comparison purposes. Then the "better choice" question can be answered in a more informed manner. (The Raku packages "LLM::Functions" and "LLM::Prompts" already have corresponding Python implementations.)


Specifications for asynchronous LLM computations with Raku's "LLM::Graph" detail how to manage complex, multi-step LLM workflows by representing them as graphs. By defining the workflow as a graph, developers can execute LLM function calls concurrently, enabling higher throughput and lower latency than synchronous, step-by-step processes.

"LLM::Graph" uses a graph structure to manage dependencies between tasks, where each node represents a computation and edges dictate the flow. Asynchronous behavior is a default feature, with specific options available for control.


What is better in Raku than Python? Did you use any of the dedicated Raku LLM packages?


I think Raku is better than Python for agent-based systems for a few reasons:

- You don't have to think about concurrency or multithreading the way you do in Python; there is no GIL to worry about. Built-in features like Supply and the hyper-operators are all available in the language, so it is really easy to hook up disparate parts of a distributed agent without reaching for async or actor libraries, as you would in Python. (See the sketch after this list.)

- Something I prefer is the OOP abstractions in Raku; they are much richer than Python's. YMMV, depending on what you prefer.

- Better support for gradual typing and constraints out of the box in Raku.
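
A small sketch of the features named above (plain Raku, nothing imported):

  # Supply: a built-in asynchronous stream; no actor library needed.
  react whenever Supply.interval(0.1).head(3) -> $tick {
      say "tick $tick";
  }

  # Hyper-operator: element-wise work the runtime may parallelize.
  say (1..5) »*» 2;    # (2 4 6 8 10)

  # Gradual typing and constraints out of the box:
  subset Prob of Numeric where { 0 <= $_ <= 1 };
  sub score(Prob $p) { "ok: $p" }
  say score(0.7);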

Python wins on the AI ecosystem though :)

I started messing around with this code several years ago, and the LLM libs in Raku were not as rich as they are today. I thought I needed a specific type of LLM message-handling structure that could be extended to do tool handling and some Letta-style memory management (which I never got around to!). I have some Python libs of my own and I ported them. I suspect if I were starting now, I would use what is available in the community. This version of TallMountain is the last of a long series of prototypes, so I never rewrote those parts.


Nice to see others who think that Raku is a good fit for LLMs ... I have had some success integrating LLM::DWIM (a Raku command-line LLM client built on LLM::Functions etc.) with a DSL approach to make a command-line calculator based on Raku Grammars:

  > crag
  > ?^<elephant mass in kg> / ?^<mouse mass in kg>    #300000
  > ?^<speed of a flying swallow in mph>              #30mph
https://github.com/librasteve/raku-App-Crag

PS. Raku has Inline::Python for when you need a lib from the Python ecosystem (which I am sure you know, but in case others are curious).
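
For example, a hedged sketch following the Inline::Python README (the module must be installed and a Python shared library must be available):

  use Inline::Python;

  # Run Python code from Raku:
  my $py = Inline::Python.new;
  $py.run('import math');
  $py.run('print(math.sqrt(2))');   # 1.4142135623730951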


Good to know.

BTW, several years ago the LLM revolution hadn't happened yet. Raku started to have sound LLM packages circa March-May 2023.


Yes indeed. I was already poking around with GPT-3 sometime in 2022. I can’t even remember exactly when. Feels like ages ago now!

