Hervé BREDIN committed

Commit dba06d0
1 Parent(s): e8e9eaf

feat: initial import

Files changed (2)
  1. README.md +99 -0
  2. pytorch_model.bin +3 -0
README.md CHANGED
---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- speaker-segmentation
- voice-activity-detection
- overlapped-speech-detection
- resegmentation
license: mit
inference: false
extra_gated_prompt: "The collected information will help acquire a better knowledge of the pyannote.audio user base and help its maintainers apply for grants to improve it further. If you are an academic researcher, please cite the relevant papers in your own publications using the model. If you work for a company, please consider contributing back to pyannote.audio development (e.g. through unrestricted gifts). We also provide scientific consulting services around speaker diarization and machine listening."
extra_gated_fields:
  Company/university: text
  Website: text
  I plan to use this model for (task, type of audio data, etc): text
---

We propose (paid) scientific [consulting services](https://herve.niderb.fr/consulting.html) to companies willing to make the most of their data and open-source speech processing toolkits (`pyannote` in particular).

# 🎹 Speaker segmentation with powerset encoding

The various concepts behind this model are described in detail in this [paper](https://www.isca-speech.org/archive/interspeech_2023/plaquet23_interspeech.html).

It ingests (ideally 10s of) mono audio sampled at 16kHz and outputs speaker diarization as a (num_frames, num_classes) matrix, where the 7 classes are _non-speech_, _speaker #1_, _speaker #2_, _speaker #3_, _speakers #1 and #2_, _speakers #1 and #3_, and _speakers #2 and #3_.
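
Powerset scores can be mapped back to per-speaker activity by picking the most likely class per frame. A minimal NumPy sketch; the class-to-speaker mapping below follows the order listed above and is an assumption for illustration, not necessarily the model's internal ordering:

```python
import numpy as np

# hypothetical class order: non-speech, spk1, spk2, spk3, spk1+2, spk1+3, spk2+3
POWERSET = np.array([
    [0, 0, 0],  # non-speech
    [1, 0, 0],  # speaker #1
    [0, 1, 0],  # speaker #2
    [0, 0, 1],  # speaker #3
    [1, 1, 0],  # speakers #1 and #2
    [1, 0, 1],  # speakers #1 and #3
    [0, 1, 1],  # speakers #2 and #3
])

def powerset_to_multilabel(scores: np.ndarray) -> np.ndarray:
    """Convert (num_frames, 7) powerset scores to (num_frames, 3) binary
    per-speaker activity by picking the most likely class per frame."""
    best = scores.argmax(axis=1)  # (num_frames,)
    return POWERSET[best]         # (num_frames, 3)

# toy scores: frame 0 -> speaker #1, frame 1 -> speakers #1 and #2
scores = np.array([
    [0.10, 0.70, 0.05, 0.05, 0.05, 0.03, 0.02],
    [0.05, 0.10, 0.10, 0.05, 0.60, 0.05, 0.05],
])
activity = powerset_to_multilabel(scores)  # [[1, 0, 0], [1, 1, 0]]
```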

It has been trained by Séverin Baroudi with [pyannote.audio](https://github.com/pyannote/pyannote-audio) `3.0.0` using the combination of the training sets of AISHELL, AliMeeting, AMI, AVA-AVD, DIHARD, Ego4D, MSDWild, REPERE, and VoxConverse.

## Usage

```python
# 1. visit hf.co/pyannote/segmentation-3.0.0 and accept user conditions
# 2. visit hf.co/settings/tokens to create an access token
# 3. instantiate pretrained model
from pyannote.audio import Model
model = Model.from_pretrained(
    "pyannote/segmentation-3.0.0",
    use_auth_token="ACCESS_TOKEN_GOES_HERE")
```

### Speaker diarization

This model cannot be used to perform speaker diarization of full recordings on its own (it only processes 10s chunks).

See the [pyannote/speaker-diarization-3.0.0](https://hf.co/pyannote/speaker-diarization-3.0.0) pipeline, which uses an additional speaker embedding model to perform full-recording speaker diarization.
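
For intuition only: a full-recording pipeline slides fixed-size windows over the audio, runs the segmentation model on each window, and combines the overlapping frame scores (speaker-embedding clustering, needed to keep local speaker labels consistent across windows, is omitted here). A toy sketch of the score-aggregation step, not the actual pipeline code:

```python
import numpy as np

def aggregate_chunks(chunk_scores, chunk_starts, num_frames):
    """Average per-frame class scores from overlapping fixed-size chunks.

    chunk_scores: list of (chunk_len, num_classes) arrays, one per chunk
    chunk_starts: frame offset of each chunk within the full recording
    """
    num_classes = chunk_scores[0].shape[1]
    total = np.zeros((num_frames, num_classes))
    count = np.zeros((num_frames, 1))
    for scores, start in zip(chunk_scores, chunk_starts):
        end = start + scores.shape[0]
        total[start:end] += scores
        count[start:end] += 1
    count[count == 0] = 1  # leave uncovered frames at zero
    return total / count

# two overlapping 4-frame chunks over a 6-frame recording
chunks = [np.full((4, 7), 1.0), np.full((4, 7), 2.0)]
merged = aggregate_chunks(chunks, chunk_starts=[0, 2], num_frames=6)
# frames 0-1 average to 1.0, overlapping frames 2-3 to 1.5, frames 4-5 to 2.0
```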

### Voice activity detection

```python
from pyannote.audio.pipelines import VoiceActivityDetection
pipeline = VoiceActivityDetection(segmentation=model)
HYPER_PARAMETERS = {
    # remove speech regions shorter than that many seconds.
    "min_duration_on": 0.0,
    # fill non-speech regions shorter than that many seconds.
    "min_duration_off": 0.0,
}
pipeline.instantiate(HYPER_PARAMETERS)
vad = pipeline("audio.wav")
# `vad` is a pyannote.core.Annotation instance containing speech regions
```
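
The two hyper-parameters act as a simple post-processing on the detected regions. A toy illustration of their effect on (start, end) pairs in seconds (not pyannote's actual implementation):

```python
def postprocess(regions, min_duration_on=0.0, min_duration_off=0.0):
    """First fill gaps shorter than `min_duration_off`,
    then drop regions shorter than `min_duration_on`."""
    if not regions:
        return []
    merged = [list(regions[0])]
    for start, end in regions[1:]:
        if start - merged[-1][1] < min_duration_off:
            merged[-1][1] = max(merged[-1][1], end)  # fill short gap
        else:
            merged.append([start, end])
    return [(s, e) for s, e in merged if e - s >= min_duration_on]

regions = [(0.0, 0.4), (0.65, 2.0), (2.05, 3.0)]
cleaned = postprocess(regions, min_duration_on=0.5, min_duration_off=0.1)
# the 0.05s gap is filled, then the 0.4s region is dropped: [(0.65, 3.0)]
```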

### Overlapped speech detection

```python
from pyannote.audio.pipelines import OverlappedSpeechDetection
pipeline = OverlappedSpeechDetection(segmentation=model)
HYPER_PARAMETERS = {
    # remove overlapped speech regions shorter than that many seconds.
    "min_duration_on": 0.0,
    # fill non-overlapped speech regions shorter than that many seconds.
    "min_duration_off": 0.0,
}
pipeline.instantiate(HYPER_PARAMETERS)
osd = pipeline("audio.wav")
# `osd` is a pyannote.core.Annotation instance containing overlapped speech regions
```

## Citation

```bibtex
@inproceedings{Plaquet23,
  author={Alexis Plaquet and Hervé Bredin},
  title={{Powerset multi-class cross entropy loss for neural speaker diarization}},
  year={2023},
  booktitle={Proc. INTERSPEECH 2023},
}
```

```bibtex
@inproceedings{Bredin23,
  author={Hervé Bredin},
  title={{pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe}},
  year={2023},
  booktitle={Proc. INTERSPEECH 2023},
}
```
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:da85c29829d4002daedd676e012936488234d9255e65e86dfab9bec6b1729298
size 5905440