
Commit b907f54

Update main docs index.rst
1 parent 2a7adb4 commit b907f54

1 file changed (+14, -13)

docs/index.rst
@@ -4,26 +4,27 @@
    :align: right


-SDialog: Synthetic Dialog Generation, Evaluation, and Interpretability
-=======================================================================
+
+SDialog: A Python Toolkit for End-to-End Dialogue Generation, Agent Building, Simulation, and Evaluation
+=======================================================================================================

 SDialog is an MIT-licensed open-source toolkit for building, simulating, and evaluating LLM-based conversational agents end-to-end. It aims to bridge **agent construction → dialog generation → evaluation → (optionally) interpretability** in a single reproducible workflow, so you can generate reliable, controllable dialog systems or data at scale.

-It standardizes a Dialog schema and offers persona‑driven multi‑agent simulation with LLMs, composable orchestration, built‑in metrics, and mechanistic interpretability.
+It standardizes a Dialog schema and offers persona-driven multi-agent simulation with LLMs, composable orchestration, built-in metrics, and mechanistic interpretability.

 ✨ Key Features
 ---------------

 - **Standard dialog schema** with JSON import/export *(aiming to standardize dialog dataset formats with your help 🙏)*
-- **Persona‑driven multi‑agent simulation** with contexts, tools, and thoughts
+- **Persona-driven multi-agent simulation** with contexts, tools, and thoughts
 - **Composable orchestration** for precise control over behavior and flow
-- **Built‑in evaluation** (metrics + LLM‑as‑judge) for comparison and iteration
+- **Built-in evaluation** (metrics + LLM-as-judge) for comparison and iteration
 - **Native mechanistic interpretability** (inspect and steer activations)
 - **Easy creation of user-defined components** by inheriting from base classes (personas, metrics, orchestrators, etc.)
 - **Interoperability** across OpenAI, Hugging Face, Ollama, AWS Bedrock, Google GenAI, Anthropic, and more
 - **Audio generation** for converting text dialogs to realistic audio conversations

-If you are building conversational systems, benchmarking dialog models, producing synthetic training corpora, simulating diverse users to test or probe conversational systems, or analyzing internal model behavior, SDialog provides an end‑to‑end workflow.
+If you are building conversational systems, benchmarking dialog models, producing synthetic training corpora, simulating diverse users to test or probe conversational systems, or analyzing internal model behavior, SDialog provides an end-to-end workflow.

 Quick Links
 -----------
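The features above center on a standard dialog schema with JSON import/export. As a rough illustration of the idea (the field names here are assumptions, not SDialog's documented schema), a dialog record could round-trip through JSON like this:

.. code-block:: python

    # Hypothetical illustration of a standardized dialog record; field
    # names are assumptions, not SDialog's actual schema.
    import json

    dialog = {
        "id": "dialog-0001",
        "turns": [
            {"speaker": "agent", "text": "Hi! How can I help you today?"},
            {"speaker": "customer", "text": "I was charged twice for my order."},
        ],
    }

    # JSON export/import round-trip, as the "JSON import/export" feature suggests.
    serialized = json.dumps(dialog, indent=2)
    restored = json.loads(serialized)
    assert restored == dialog

Standardizing on one such record shape is what lets generation, evaluation, and export stages interoperate.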
@@ -64,7 +65,7 @@ Alternatively, a ready-to-use Apptainer image (.sif) with SDialog and all depend
 🏁 Quickstart Tour
 ------------------

-Here's a short, hands‑on example: a support agent helps a customer disputing a double charge. We add a small refund rule and two simple tools, generate three dialogs for evaluation, then serve the agent on port 1333 for Open WebUI or any OpenAI‑compatible client.
+Here's a short, hands-on example: a support agent helps a customer disputing a double charge. We add a small refund rule and two simple tools, generate three dialogs for evaluation, then serve the agent on port 1333 for Open WebUI or any OpenAI-compatible client.

 .. code-block:: python
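The paragraph above serves the agent on port 1333 for Open WebUI or any OpenAI-compatible client. A minimal sketch of such a client using the openai Python package; the model name and api_key are placeholders, and it assumes the server exposes the standard /v1 chat-completions route:

.. code-block:: python

    # Minimal OpenAI-compatible client pointed at the locally served agent.
    # Model name and api_key are placeholder assumptions.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1333/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="support-agent",  # placeholder model identifier
        messages=[{"role": "user", "content": "I was charged twice for my order."}],
    )
    print(response.choices[0].message.content)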
@@ -140,12 +141,12 @@ Core Capabilities
 Testing Remote Systems
 ^^^^^^^^^^^^^^^^^^^^^^

-Probe OpenAI‑compatible deployed systems with controllable simulated users and capture dialogs for evaluation.
+Probe OpenAI-compatible deployed systems with controllable simulated users and capture dialogs for evaluation.

-You can use SDialog as a controllable test harness for any OpenAI‑compatible system such as **vLLM**-based ones by role‑playing realistic or adversarial users against your deployed system:
+You can use SDialog as a controllable test harness for any OpenAI-compatible system such as **vLLM**-based ones by role-playing realistic or adversarial users against your deployed system:

-- Black‑box functional checks (Does the system follow instructions? Handle edge cases?)
-- Persona / use‑case coverage (Different goals, emotions, domains)
+- Black-box functional checks (Does the system follow instructions? Handle edge cases?)
+- Persona / use-case coverage (Different goals, emotions, domains)
 - Regression testing (Run the same persona batch each release; diff dialogs)
 - Safety / robustness probing (Angry, confused, or noisy users)
 - Automated evaluation (Pipe generated dialogs directly into evaluators)
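To make the black-box checks listed above concrete: a minimal sketch that drives a scripted "confused user" against an OpenAI-compatible endpoint such as a vLLM server and captures the dialog for later evaluation. The endpoint URL, api_key, and model name are assumptions, and the scripted loop stands in for SDialog's simulated users:

.. code-block:: python

    # Black-box probe of an OpenAI-compatible endpoint with a scripted user;
    # endpoint URL, api_key, and model name are placeholder assumptions.
    from openai import OpenAI

    system_under_test = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    user_turns = [
        "I think my bill is wrong??",
        "No, I already told you my account number.",
        "Forget it. Can I just get a refund or not?",
    ]

    history = []
    for turn in user_turns:
        history.append({"role": "user", "content": turn})
        reply = system_under_test.chat.completions.create(
            model="my-deployed-model",  # placeholder
            messages=history,
        )
        history.append({"role": "assistant", "content": reply.choices[0].message.content})

    # The captured `history` can be piped into evaluators or diffed across releases.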
@@ -194,7 +195,7 @@ Import, export, and transform dialogs from JSON, text, CSV, or Hugging Face data
 Evaluation and Comparison
 ^^^^^^^^^^^^^^^^^^^^^^^^^^

-Score dialogs with built‑in metrics and LLM judges, and compare datasets with aggregators and plots.
+Score dialogs with built-in metrics and LLM judges, and compare datasets with aggregators and plots.

 .. code-block:: python
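As a rough sketch of the LLM-as-judge idea mentioned above (a generic illustration, not SDialog's evaluation API; the judge model name is a placeholder), a transcript can be scored like this:

.. code-block:: python

    # Generic LLM-as-judge sketch: ask a judge model to rate a transcript.
    # Assumes OPENAI_API_KEY is set; the judge model name is a placeholder.
    from openai import OpenAI

    judge = OpenAI()

    transcript = (
        "agent: Hi! How can I help?\n"
        "customer: I was charged twice.\n"
        "agent: Sorry about that, I've refunded the duplicate charge."
    )

    result = judge.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[
            {"role": "system", "content": "Rate the agent's helpfulness from 1 to 5. Reply with the number only."},
            {"role": "user", "content": transcript},
        ],
    )
    print(result.choices[0].message.content)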
@@ -223,7 +224,7 @@ Score dialogs with built‑in metrics and LLM judges, and compare datasets with
 Mechanistic Interpretability
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Capture per‑token activations and steer models via Inspectors for analysis and interventions.
+Capture per-token activations and steer models via Inspectors for analysis and interventions.

 .. code-block:: python
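As a generic illustration of the underlying technique (a PyTorch forward hook on a Hugging Face model, not SDialog's Inspector API), per-token activations can be captured like this:

.. code-block:: python

    # Capture per-token activations from one transformer block via a
    # forward hook; gpt2 and layer 6 are arbitrary placeholder choices.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    captured = []

    def hook(module, inputs, output):
        # output[0]: hidden states, shape (batch, seq_len, hidden_dim)
        captured.append(output[0].detach())

    handle = model.transformer.h[6].register_forward_hook(hook)
    with torch.no_grad():
        model(**tokenizer("Hello there", return_tensors="pt"))
    handle.remove()

    print(captured[0].shape)  # one activation vector per token

Steering would then amount to editing those hidden states inside the hook before they flow to the next layer.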
