Why Can’t Users Teach Siri about Its Mistakes?

I have a speech disability and, unfortunately, Siri doesn’t understand a word I say. It’s almost amusing when I think of how wrong Siri has gotten me at times, or how she has totally failed to reply at all; it’s almost as if she were embarrassed by not understanding. It would be nice to be able to tell Siri what she (or it) got wrong, but that HAS to be done in writing, because otherwise she wouldn’t understand, and it would be like the snake eating its own tail!

Funny thing, I do that, too!

My husband and I would definitely like to find a way to let Siri know when it has been improperly invoked. It is regularly triggered by my husband’s voice on both of our iPhones, and sometimes on our iPads, too. It happens up close and far away, and sometimes even virtually. It happens when the things he says are not even close to “Hey Siri!” It has to stop!

On the other hand, Siri frequently can’t hear or understand me to save my life. I would love to have the option to rate Siri’s accuracy (much like we are asked to rate the usefulness of voicemail transcription or Facebook translations), on a case-by-case basis, not all the time.

It would be really great if Apple would acquire Nuance’s speech recognition software…

We have our ups and downs with Siri, too. One thing I haven’t figured out to this day is why Siri doesn’t respond when my wife says “Hey Siri,” but always responds when I say it.
Does Siri have a voice recognition function, so that it only responds when it detects its master’s voice? :smiley: Could I train Siri to recognize different voices?

I thought Siri was specific to your voice on your phone? My SO and I don’t trigger each other’s phones.

Diane

Interesting thought. That would explain it, and it raises four questions:

How does Siri learn my voice? I don’t recall any training session.

Could we actively train Siri to listen to my wife’s voice? My wife’s iPhone is set up with my Apple ID, which is the only one we want to have.

Could I retrain Siri on my voice and how?

Does that imply that the voice recognition is Apple ID–based, which would mean that on any shared device, like an iPad at home or a HomePod for that matter, only one person in the household is able to trigger Siri?

There has long been some training of Siri for the Hey Siri feature, but I think that’s it.

There is a training session when you set up Hey Siri, where you say “Hey, Siri!” and a few other things several times.

Looks like Apple’s contractors were processing about two requests per minute. So Apple would need even more people.


So here’s my question about this idea: If Apple did this, would their competitors hire a bunch of cheap labor to input a bunch of inaccurate corrections in order to make Siri worse?


Fair question, and not one I’d considered. That said, there are lots of ways that one company could use another company’s public feedback mechanisms in a sort of denial-of-service way, and I haven’t heard of that happening before. It’s probably (a) not worth the effort and (b) a dangerous tactic that could result in problematic escalation.

I’m just wondering if this isn’t part of why it’s not already happening. This approach seems more vulnerable to such an attack than other feedback mechanisms, since it’s unlikely that most submissions would be reviewed (otherwise it would kind of defeat the purpose).

I doubt we’ll ever know, but it seems like too juvenile a behavior for real companies to engage in. The negative press for the attacker if it leaked (and it would leak, because of the low pay involved in the manual effort of polluting a data set) would vastly overshadow any possible benefits. And any company that thought its feedback mechanism was being polluted would just put mechanisms in place to filter out the pollution, or would ignore the feedback entirely. So the only downside to the victimized company would be a slight cost, or some loss of legitimate feedback among the noise.

Voice Control in iOS 13 and Catalina will let you do this! From my early testing, you have to be dictating using Voice Control, not Siri dictation (from tapping the mic button on the keyboard).

I’ve got both betas. Can you tell us more and how to actually do it?

I know it is possible to do a lot more, including corrections, with Dragon Dictate, but the learning curve is a bit steep. And it is expensive. But for a professional who uses it every day, the investment makes sense.

Turn on Voice Control in Settings > Accessibility > Voice Control in iOS, and in System Preferences > Accessibility > Voice Control in macOS.

Then, on the Mac, click the Commands button in that preference pane, and scroll down until you get to all the Text commands. There are a ton. I’ve had very little time testing, but from what I can tell, they work the same on both operating systems.

So you’d get a text area open while Voice Control was on, and then say something like:

Four score and seventeen years ago change seventeen to seven our fathers brought forth on this continent comma

and so on and so on. My initial test of Voice Control was launching TextEdit, creating a new document, reading the first line of the Gettysburg Address into it, and then saving and naming the document, all with my voice. It was brilliant.

I’ve long said that what I want is a “Dammit, Siri!” command. Whenever you say “Dammit, Siri” it should figure out what the last thing it did was and never do that in response to that input ever again.

Perfect! The other night I was trying to text “you need a backup app” to my boyfriend (his NBC radar wasn’t working), and Siri replied, “Your message says ‘Do you need a bath?’” Twice!!

Diane

You can actually use “Hey Siri” to turn Voice Control on and off.