In particular, Metabase and Superset can be deployed with DuckDB support. Since you mentioned customer-facing dashboards, note that Metabase embedding is not free. Just to say, our SeekTable also has a DuckDB connector (and can be used as an embedded BI).
Websets are cool - I remember that two decades ago there was a Google Labs project that tried to return Google search results as 'objects' x 'properties', but it never left their research sandbox (I cannot remember the project's name, unfortunately).
Searches that give tabular results can be cheap if you already have structured datasets (extracted from crawled data): an LLM can simply convert the user's natural-language query into a SQL (or SQL-like) query which can be cost-efficiently executed - say, with DuckDB. This approach can also give more correct results, since values in these structured datasets can be validated in the background rather than as part of an individual 'deep research' task.
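The idea can be sketched roughly like this (a minimal sketch using Python's built-in sqlite3 as a stand-in for DuckDB; `nl_to_sql`, the table, and its data are all hypothetical - in a real service an LLM would generate the SQL):

```python
import sqlite3

# Pre-extracted structured dataset (normally built from crawled data
# and validated in the background, not per-query).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE companies (name TEXT, country TEXT, employees INT)")
con.executemany(
    "INSERT INTO companies VALUES (?, ?, ?)",
    [("Acme", "US", 1200), ("Globex", "DE", 300), ("Initech", "US", 80)],
)

def nl_to_sql(question: str) -> str:
    # Hypothetical stub: here an LLM would translate the user's
    # natural-language question into SQL over the known schema.
    return ("SELECT name, employees FROM companies "
            "WHERE country = 'US' ORDER BY employees DESC")

rows = con.execute(nl_to_sql("US companies by headcount")).fetchall()
print(rows)  # [('Acme', 1200), ('Initech', 80)]
```

The expensive LLM call is only the tiny NL-to-SQL translation; the query itself runs cheaply against already-validated data.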
I understand that this is another kind of search service; however, it can be a way to offer free/cheap searches for users who don't need expensive individual research tasks.
A common approach is to use a data warehouse that is suitable for executing OLAP queries (to load/calculate data for your dashboards) in seconds. In the simplest cases ('small data') this can simply be a replica of the app's database - a separate (read-only) DB server used for reporting/analytics purposes. It is easy to configure master-slave replication with the built-in capabilities of SQL Server/PostgreSQL/MySQL.
If the replica server cannot execute queries fast enough, a specialized (OLAP-optimized) database should be used instead. This can be a cloud service (like BigQuery, Redshift, Snowflake, MotherDuck) or a self-hosted solution (ClickHouse, PostgreSQL with the pg_analytics extension, or even in-process DuckDB). Data sync is performed either with a scheduled full copy (simple, but not suitable for near real-time analytics) or via CDC (see Airbyte).
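The scheduled full-copy option really is that simple - here is a minimal sketch (sqlite3 stands in for both the app DB and the analytical store; the `orders` table and its columns are illustrative):

```python
import sqlite3

source = sqlite3.connect(":memory:")  # stands in for the app DB / replica
target = sqlite3.connect(":memory:")  # stands in for the OLAP store

source.execute("CREATE TABLE orders (id INT, amount REAL)")
source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

def full_copy(table: str) -> None:
    # Naive scheduled full copy: drop and rebuild the target table on
    # every run. Fine for small data; for near real-time use CDC instead.
    rows = source.execute(f"SELECT id, amount FROM {table}").fetchall()
    target.execute(f"DROP TABLE IF EXISTS {table}")
    target.execute(f"CREATE TABLE {table} (id INT, amount REAL)")
    target.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)

full_copy("orders")
print(target.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
# (2, 35.5)
```

In practice this function would be invoked by a cron job or any scheduler; the trade-off vs CDC is simplicity against data freshness.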
Just tried gemma3:4b for structured output and it fails with a strange error (Ollama is the latest version):
Ollama error: POST predict: Post "http://127.0.0.1:49675/completion": read tcp 127.0.0.1:49677->127.0.0.1:49675: wsarecv: An existing connection was forcibly closed by the remote host.
Not sure whether this is an Ollama or a gemma3:4b problem. At the same time, gemma3:12b works fine for the same API request (100% identical, the only difference is the model id).
This is an extension for the standard .NET Core DI container. It makes it possible to have declarative JSON definitions of components (services) in _addition_ to the existing code-based approach.
For example, a JSON IoC config may be useful for applications with a plugin architecture, or when parts of the application are generated from some kind of domain-specific model.
> everything is translated to raw sql then pushed to the database layer
All ROLAP-style BI tools do that (including Power BI when it uses DirectQuery connection mode); it is expected that the underlying data sources are fast enough to handle these aggregate queries very quickly. In fact, this approach may be used even with non-OLAP databases (like PostgreSQL or SQL Server), and a specialized analytical datastore is needed only for really big datasets (BigQuery, Snowflake, ClickHouse, etc). In many cases performance issues can be solved by correct usage of report parameters that filter DB records by indexed columns, by pre-aggregated materialized views, or by tuning the SQL query generation (say, avoiding JOINs and SQL calculations when they are not needed for the concrete report).
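The pre-aggregation trick can be shown in a few lines (a sketch using sqlite3, where the 'materialized view' is just a plain table; in PostgreSQL this would be a `CREATE MATERIALIZED VIEW` refreshed on a schedule - table and column names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (day TEXT, region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("2024-01-01", "EU", 100.0), ("2024-01-01", "US", 50.0),
     ("2024-01-02", "EU", 70.0)],
)

# Pre-aggregate once; dashboard queries then scan the small aggregate
# table instead of grouping over all raw rows on every request.
con.execute("""
    CREATE TABLE sales_by_region AS
    SELECT region, SUM(amount) AS total FROM sales GROUP BY region
""")

print(con.execute(
    "SELECT region, total FROM sales_by_region ORDER BY region"
).fetchall())
# [('EU', 170.0), ('US', 50.0)]
```

The same idea scales up: the aggregate table is rebuilt periodically, and the ROLAP tool's generated SQL is pointed at it instead of the raw fact table.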
This doesn't mean that Excel's PivotTable (and SSAS cubes) is good and ROLAP-style pivot tables are bad - their applications are different. The main purpose of pivot-table reports in BI tools is to show actual (near real-time) data; when users need to explore some dataset in a disconnected mode, they can always export a concrete report's data to Excel - in fact, some BI tools can export their internal pivot table into an Excel file with a pre-configured PivotTable.
Search-driven analytics is not really a new thing, and products in this space existed before the LLM era. This kind of interface can be useful for some categories of users, but it is not a game-changer: prompts cannot replace Excel and its pivot tables, and in fact typing prompts may be even more complicated for users than good old 'clicks'.
You may try these online pivot tables at https://www.seektable.com where you can re-order rows/columns simply by clicking on a header, and apply filters via a simple input where you specify which items to keep or exclude.
PBI can be accessed via an XMLA endpoint, which can be consumed by many old components that were previously used with SSRS, plus it has a 'dataset execute query' REST API - so in this sense PBI can be used as a headless BI. I don't know much about the Tableau/Qlik APIs - do they provide an API for querying their internal semantic model?