This is actually a nice improvement, because 1) you can train "Hey Siri" on your own voice rather than having it triggered by anything that remotely resembles the phrase, and 2) IIRC they mentioned in the keynote that always-on detection is now possible because the motion coprocessor is integrated onto the same chip as the main CPU, so they can detect the phrase efficiently as an always-on feature (locally on the device, of course).
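To make the idea concrete, here's a toy sketch of what speaker-trained wake-phrase matching could look like. This is purely illustrative and is not how Apple's implementation works — a real system runs a small model on the low-power coprocessor; here enrollment just averages a few (made-up) feature vectors from the owner's voice and detection is a cosine-similarity threshold.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def enroll(samples):
    # Average several utterances of the phrase into one voice template.
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def detect(frame, template, threshold=0.95):
    # Fire only when the incoming frame closely matches the enrolled voice.
    return cosine(frame, template) >= threshold

# Enrollment: three toy feature vectors from the owner's voice.
template = enroll([[1.0, 0.2, 0.1], [0.9, 0.25, 0.12], [1.1, 0.18, 0.09]])

print(detect([1.0, 0.21, 0.1], template))  # owner's voice -> True
print(detect([0.1, 1.0, 0.9], template))   # different voice -> False
```

The point of the threshold against a personal template is exactly the improvement described above: a stranger saying something that merely resembles the phrase scores below the threshold and doesn't trigger.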
As to whether they would send any of the detected phrases to the "government", I think that is extremely unlikely, both for technical reasons and because of Apple's ideological stance on privacy.
All ideologies have a price.