Here's a short, hands-on example where a support agent helps a customer disputing a double charge. We'll add a tiny rule to steer refund behavior and a simple tool to check order status, generate three dialogs for later evaluation, and then serve the support agent on port 1333 for Open WebUI or any OpenAI-compatible client.
```python
import sdialog
from sdialog import Context
from sdialog.agents import Agent
from sdialog.personas import SupportAgent, Customer
from sdialog.orchestrators import SimpleReflexOrchestrator

dialog.print(orchestration=True)  # pretty print each dialog

# Finally, let's serve the support agent to interact with real users (OpenAI-compatible API)
# Point Open WebUI or any OpenAI-compatible client to: http://localhost:1333
# Model name will appear as "Support:latest" (AGENT_NAME:latest).
support_agent.serve(1333)
```
> [!NOTE]
> - See [orchestration tutorial](https://github.com/idiap/sdialog/blob/main/tutorials/3.multi-agent%2Borchestrator_generation.ipynb) and [agents with tools and thoughts](https://github.com/idiap/sdialog/blob/main/tutorials/7.agents_with_tools_and_thoughts.ipynb).
> - Serving agents: more details on the OpenAI/Ollama-compatible API in the docs: [Serving Agents](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#serving-agents).
> - Dialogs are [rich objects](https://sdialog.readthedocs.io/en/latest/api/sdialog.html#sdialog.Dialog) with helper methods (filter, slice, transform, etc.) that can be easily exported and loaded.
> - Next: see [Loading and saving dialogs](#loading-and-saving-dialogs) and [Auto-generating personas and contexts](#auto-generating-personas-and-contexts) for persistence and controlled diversity.
### 🧪 Testing remote systems with simulated users
Use SDialog as a controllable test harness for any OpenAI-compatible system, such as one served by vLLM, by role-playing realistic or adversarial users against your deployed system:

* Black-box functional checks (Does the system follow instructions? Handle edge cases?)
* Persona / use-case coverage (Different goals, emotions, domains)
* Regression testing (Run the same persona batch each release; diff dialogs)
* Safety / robustness probing (Angry, confused, or noisy users)
* Automated evaluation (Pipe generated dialogs directly into evaluators below)

Core idea: wrap your system as an `Agent`, talk to it with simulated user `Agent`s, and capture `Dialog`s you can save, diff, and score.
Below is a minimal example where our simulated customer interacts once with your hypothetical remote endpoint:
```python
# Our remote system (your conversational backend exposing an OpenAI-compatible API)
system = Agent(
    model="your/model",  # Model name exposed by your server
    openai_api_base="http://your-endpoint.com:8000/v1",  # Base URL of the service
    openai_api_key="EMPTY",  # Or a real key if required
    name="System"
)

# Let's make the system talk to our simulated customer defined in the example above.
dialog = system.dialog_with(simulated_customer)
dialog.to_file("dialog_0.json")
```
Next, evaluate these dialogs or orchestrate agents with more complex flows using rule/LLM hybrid orchestrators (see [tutorials 3 & 7](https://github.com/idiap/sdialog/tree/main/tutorials)).
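One cheap way to implement the regression-testing idea above is to diff saved dialog files between releases with the standard library. A minimal sketch, assuming each file written by `to_file` is a JSON object with a `turns` list of `{"speaker", "text"}` entries (adjust the two accessor lines to the actual `Dialog` schema):

```python
import difflib
import json

def dialog_lines(path):
    """Flatten a saved dialog JSON into "speaker: text" lines (assumed schema)."""
    with open(path) as f:
        data = json.load(f)
    return [f'{t["speaker"]}: {t["text"]}' for t in data["turns"]]

def diff_dialogs(old_path, new_path):
    """Unified diff between two saved dialogs, for release-to-release comparison."""
    return list(difflib.unified_diff(
        dialog_lines(old_path), dialog_lines(new_path),
        fromfile=old_path, tofile=new_path, lineterm="",
    ))

# e.g. diff_dialogs("release_1/dialog_0.json", "release_2/dialog_0.json")
```

Running this over the same persona batch after each release surfaces behavioral drift as ordinary unified diffs.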
### 💾 Loading and saving dialogs
[Dialog](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#dialog)s are JSON‑serializable and can be created from multiple formats. After generating one you can persist it, then reload later for evaluation, transformation, or mixing with real data.
avg_words_turn = sum(len(turn) for turn in dialog) / len(dialog)
```
## 📊 Evaluate and compare
Use [built‑in metrics](https://sdialog.readthedocs.io/en/latest/api/sdialog.html#module-sdialog.evaluation) (readability, flow, linguistic features, LLM judges) or easily create new ones, then aggregate and compare datasets via `DatasetComparator`.
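To make the aggregate-and-compare pattern concrete, here is a back-of-the-envelope sketch with a single toy metric (average words per turn) and dialogs reduced to plain lists of utterance strings. SDialog's metric classes, `Dialog` interface, and `DatasetComparator` options are richer than this, so treat it purely as an illustration of the idea:

```python
def avg_words_per_turn(dialog):
    """Toy metric: mean word count per turn, with a dialog as a list of utterance strings."""
    return sum(len(turn.split()) for turn in dialog) / len(dialog)

def compare(datasets):
    """Score each named dataset (a list of dialogs) and return {name: mean metric value}."""
    return {
        name: sum(avg_words_per_turn(d) for d in dialogs) / len(dialogs)
        for name, dialogs in datasets.items()
    }

scores = compare({
    "synthetic": [["hello there", "hi"], ["how can I help you today"]],
    "real": [["hello", "hi how are you"]],
})
print(scores)  # → {'synthetic': 3.75, 'real': 2.5}
```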
> [!TIP]
> See [evaluation tutorial](https://github.com/idiap/sdialog/blob/main/tutorials/5.evaluation.ipynb).
### 🧬 Auto-generating personas and contexts
Use [generators](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#attribute-generators) to fill in (or selectively control) persona/context attributes using LLMs or other data sources (functions, CSV files, inline prompts). The `.set()` method lets you override how individual attributes are produced.
```python
from sdialog.personas import Doctor, Patient
from sdialog.generators import PersonaGenerator, ContextGenerator
from sdialog import Context

# By default, unspecified attributes are LLM generated
    goals="{llm:Suggest a realistic goal for the context}"  # targeted LLM instruction
)
ctx = ctx_gen.generate()
```
> [!TIP]
> 🕹️ 👉 Try the [demo notebook](https://colab.research.google.com/github/idiap/sdialog/blob/main/tutorials/0.demo.ipynb) to experiment with generators.
## 🧠 Mechanistic interpretability
Attach Inspectors to capture per‑token activations and optionally steer (add/ablate directions) to analyze or intervene in model behavior.
> [!TIP]
> See [the tutorial](https://github.com/idiap/sdialog/blob/main/tutorials/6.agent%2Binspector_refusal.ipynb) on using SDialog to remove the refusal capability from LLaMA 3.2.
## 🔌 Backends and configuration
Many [backends are supported](https://sdialog.readthedocs.io/en/latest/sdialog/index.html#configuration-layer); just use the `"BACKEND:MODEL"` string format either to set a global default LLM for all components or to pass one to each component:
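The convention is just a prefix split on the first colon. A hypothetical helper showing how such a spec string decomposes (the backend names and the fallback default here are illustrative, not SDialog's actual resolution logic):

```python
def parse_model_spec(spec, default_backend="ollama"):
    """Split a "BACKEND:MODEL" spec; fall back to a default backend if none is given (hypothetical helper)."""
    if ":" in spec:
        backend, model = spec.split(":", 1)  # split once, model names may contain colons
        return backend, model
    return default_backend, spec

print(parse_model_spec("openai:gpt-4o"))  # → ('openai', 'gpt-4o')
print(parse_model_spec("llama3"))         # → ('ollama', 'llama3')
```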
To accelerate open, rigorous, and reproducible conversational AI research, SDialog invites the community to collaborate and help shape the future of open dialogue generation.
}
``` -->
## 🙏 Acknowledgments
This work was supported by the European Union Horizon 2020 project [ELOQUENCE](https://eloquenceai.eu/about/) (grant number 101070558).
The initial development of this project began in preparation for the 2025 Jelinek Memorial Summer Workshop on Speech and Language Technologies ([JSALT 2025](https://jsalt2025.fit.vut.cz/)) as part of the ["Play your Part" research group](https://jsalt2025.fit.vut.cz/play-your-part).
dialog = alice.dialog_with(mentor, max_turns=6)
dialog.print()
Individual agents can be served and exposed as an OpenAI-compatible API endpoint with the :meth:`~sdialog.agents.Agent.serve` method (e.g. ``mentor.serve(1333)``); see :ref:`here <serving_agents>` for more details.
Few-Shot Learning with Example Dialogs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now let's explore one of SDialog's most powerful features! We can guide our dialogues by providing examples that show the system what style, structure, or format we want. This technique, called few-shot learning, works by supplying ``example_dialogs`` to generation components. These exemplar dialogs are injected into the system prompt to steer tone, task format, and conversation flow.
Key Methods:
- :meth:`~sdialog.agents.Agent.prompt`: get the underlying system prompt used by the agent.
- :meth:`~sdialog.agents.Agent.json`: export the agent as a JSON object.
Orchestration
-------------
Orchestrators are lightweight controllers that examine the current dialog state and the last utterance from the other agent, optionally returning an instruction. They can be **ephemeral** (one-time) or **persistent** (lasting across multiple turns). Orchestrators are composed using the pipe operator:
You can expose any Agent over an OpenAI/Ollama-compatible REST API using the :meth:`~sdialog.agents.Agent.serve` method to talk to it from tools like Open WebUI, Ollama GUI, or simple HTTP clients.
.. code-block:: python

   from sdialog.agents import Agent

   # Let's create an example agent
   support = Agent(name="Support")

   # And serve it on port 1333 (default host 0.0.0.0)
   support.serve(1333)
   # Connect client to base URL localhost:1333
For example, to run Open WebUI locally in Docker, just set OLLAMA_BASE_URL to point to port 1333 on the same machine when launching the container:
.. code-block:: bash

   docker run -e OLLAMA_BASE_URL=http://host.docker.internal:1333 \