Currently franc often returns a probability close to 1 for many languages; in my opinion these probabilities should be normalized so they add up to 1.
Also, there always seems to be a language at the top with probability 1. This makes it difficult to judge how sure the "model" is about the detection, which would be another interesting data point to have.
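For illustration, here is a minimal sketch of the normalization I mean, assuming a `francAll()`-style result (an array of `[languageCode, score]` pairs). The scores below are made-up example values, not real franc output:

```javascript
// Normalize franc-style scores so they sum to 1.
// `results` is assumed to look like francAll()'s output:
// an array of [languageCode, score] pairs.
function normalize(results) {
  const total = results.reduce((sum, [, score]) => sum + score, 0);
  return results.map(([code, score]) => [code, score / total]);
}

// Illustrative (fabricated) raw scores, where the top entry is pinned at 1:
const raw = [['eng', 1], ['sco', 0.93], ['deu', 0.41]];
const normalized = normalize(raw);

// After normalization the scores sum to 1, so the gap between the
// top two entries gives a sense of how confident the detection is.
console.log(normalized);
```

With this, a top score of, say, 0.9 would actually mean "very confident", instead of every input producing a top score of 1.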