FAQ: Apple Child Safety — what is Apple doing and should people be concerned?


Apple has announced a series of new Child Safety measures that will debut on some of its platforms later this year. The details of these measures are highly technical and cover the extremely sensitive topics of child exploitation, child sexual abuse material (CSAM), and grooming. The new measures from Apple are designed “to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material.” Yet their scope and implementation have raised the ire of some security experts and privacy advocates, and have been a cause of discomfort for regular users of devices like Apple’s iPhone and iPad and services like iCloud and Messages. So what is Apple doing, and what are some of the questions and concerns people have?
Apple’s plans were announced very recently, and the company has continued to clarify details since. It is possible that some of the content on this page may change or be added to as more details come to light.
The goal
As noted, Apple wants to protect children on its platforms from grooming and exploitation, and to prevent the spread of CSAM (child sexual abuse material). To that end, it has announced three new measures:

Communication safety in Messages
CSAM detection
Expanding guidance in Siri and Search

The final one is the least intrusive and controversial, so let’s begin there.

Apple is adding new guidance to Siri and Search, not only to help shield children and parents from unsafe situations online but also to try and prevent people from deliberately searching for harmful content. From Apple:

Apple is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.
Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.

As noted, this is the least controversial move and isn’t too different from, say, a phone carrier offering parental guidance for under-18s’ searches. It is also the most straightforward: Siri and Search will try to shield young people from potential harm, help people report CSAM or child exploitation, and actively intervene with those who might seek out CSAM images, offering resources for support. There are also no real privacy or security concerns with this.

Communication safety in Messages is the second-most controversial, and second-most complicated, move. From Apple:

The Messages app will add new tools to warn children and their parents when receiving or sending sexually explicit photos

Apple says that Messages will blur out sexually explicit photos identified using machine learning, warning the child and presenting them with helpful resources, as well as telling them that it is okay for them not to look at it. If parents of children under 13 choose, they can also opt in to receive notifications if their child views a potentially sexually explicit message.
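
To make that flow concrete, here is a minimal Swift sketch of the decision logic described above. It is not Apple’s implementation; the isLikelyExplicit closure stands in for the private on-device machine-learning model, whose API has not been published.

```swift
import Foundation

enum PhotoPresentation {
    case showNormally
    case blurWithWarning   // blurred image, a warning, and links to helpful resources
}

// `isLikelyExplicit` is a placeholder for the private on-device ML classifier.
func presentation(for imageData: Data,
                  isLikelyExplicit: (Data) -> Bool) -> PhotoPresentation {
    // The check runs entirely on the device; the image is never sent anywhere for analysis.
    return isLikelyExplicit(imageData) ? .blurWithWarning : .showNormally
}
```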

Who does this apply to?
The new measures are only available to children who are members of a shared iCloud family account. The new system does not work for anyone over the age of 18, so it can’t prevent or detect unsolicited images sent between two co-workers, for instance, as the recipient must be a child.
Under-18s on Apple’s platforms are divided further still. If a child is between the ages of 13 and 17, parents won’t have the option to see notifications; however, children can still receive the content warnings. For children under 13, both the content warnings and parental notifications are available.
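A hypothetical helper like the one below captures those age tiers. The type and function names are ours, purely for illustration; the real logic lives inside Messages and the family account settings, and the feature itself is opt-in, as discussed next.

```swift
struct CommunicationSafetySettings {
    let contentWarnings: Bool        // blurred photos plus a warning shown to the child
    let parentalNotifications: Bool  // alert sent to parents if the child views the photo
}

// Returns nil when the feature doesn't apply: the parent must enable it (it's opt-in),
// and adults are never covered.
func settings(forChildAge age: Int,
              featureEnabled: Bool,
              notificationsOptedIn: Bool) -> CommunicationSafetySettings? {
    guard featureEnabled, age < 18 else { return nil }
    return CommunicationSafetySettings(
        contentWarnings: true,                                    // every enrolled child gets warnings
        parentalNotifications: age < 13 && notificationsOptedIn   // notifications only for under-13s
    )
}
```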
How can I opt out of this?
You don’t have to: parents of children to whom this might apply must opt in to use the feature, which won’t automatically be turned on when you update to iOS 15. If you do not want your children to have access to this feature, you don’t need to do anything; it’s opt-in, not opt-out.
Who knows about the alerts?
From John Gruber:

If a child sends or receives (and chooses to view) an image that triggers a warning, the notification is sent from the child’s device to the parents’ devices — Apple itself is not notified, nor is law enforcement.

Doesn’t this compromise iMessage’s end-to-end encryption?
Apple’s new feature applies to the Messages app, not just iMessage, so it can also detect messages sent via SMS, says John Gruber. Secondly, it should be noted that this detection takes place before/after either “end” of E2E. Detection is done at both ends, before a message is sent and after it is received, so iMessage’s end-to-end encryption is maintained. It is also done by on-device machine learning, so Apple can’t see the contents of the message. Most people would argue that E2E encryption means only the sender and the recipient of a message can view its contents, and that isn’t changing.
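As a rough sketch of where those checks sit relative to the encryption, assuming placeholder scan and encryption functions (none of these names are real Apple APIs):

```swift
import Foundation

// Placeholders only: the real scanning model and iMessage encryption are private.
func sendPhoto(_ imageData: Data,
               scanOnDevice: (Data) -> Void,
               encrypt: (Data) -> Data,
               transmit: (Data) -> Void) {
    scanOnDevice(imageData)          // check happens before encryption, on the sender's device
    transmit(encrypt(imageData))     // only ciphertext travels over the network
}

func receivePhoto(_ ciphertext: Data,
                  decrypt: (Data) -> Data,
                  scanOnDevice: (Data) -> Void,
                  display: (Data) -> Void) {
    let imageData = decrypt(ciphertext)  // decrypted only on the recipient's device
    scanOnDevice(imageData)              // check happens after decryption, still on-device
    display(imageData)
}
```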
Can Apple read my kid’s messages?
The new Messages tool will use on-device machine learning to detect these images, so the images won’t be reviewed by Apple itself but rather processed by an algorithm. As this is all done on-device, none of the information leaves the phone (like Apple Pay, for example). From Fast Company:

This new feature could be a powerful tool for keeping children safe from seeing or sending harmful content. And because the user’s iPhone scans photos for such images on the device itself, Apple never knows about or has access to the photos or the messages surrounding them—only the children and their parents will. There are no real privacy concerns here.

The measure also only applies to images, not text.

The most controversial and complex measure is CSAM detection, Apple’s bid to stop the spread of Child Sexual Abuse Material online. From Apple:

To help address this, new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

Apple’s new measures will check users’ photos that are to be uploaded to iCloud Photos against a database of images known to contain CSAM. These images come from the National Center for Missing and Exploited Children and other organizations in the sector. The system can only detect illegal and already documented photos containing CSAM, without ever seeing the photos themselves or scanning your photos once they’re in the cloud. From Apple:

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.
Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

None of the contents of the safety vouchers can be interpreted by Apple unless a threshold of known CSAM content is met, and Apple says the chance of incorrectly flagging someone’s account is one in one trillion per year. Only when the threshold is exceeded is Apple notified so it can manually review the hash report to confirm there is a match. If Apple confirms this, it disables the user’s account and sends a report to NCMEC.
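
The moving parts fit together roughly as in the sketch below. This is a deliberately simplified illustration, not NeuralHash or private set intersection: the real hashes, the blinding of the database, and the cryptography of the safety vouchers are far more involved, and the exact review threshold used here is an assumption, not a published figure.

```swift
import Foundation

typealias PerceptualHash = String        // stand-in for a NeuralHash value

struct SafetyVoucher {
    let matched: Bool                    // in the real system this bit is cryptographically hidden from Apple
    let encryptedImageInfo: Data
}

// Placeholder database: the real list ships inside the OS as an unreadable, blinded hash set.
let knownCSAMHashes: Set<PerceptualHash> = []

// Illustrative threshold only; Apple has not published the actual number.
let reviewThreshold = 30

func makeVoucher(photoHash: PerceptualHash, encryptedInfo: Data) -> SafetyVoucher {
    // On-device matching: the photo's content is never inspected, only its hash is compared.
    return SafetyVoucher(matched: knownCSAMHashes.contains(photoHash), encryptedImageInfo: encryptedInfo)
}

func shouldEscalateForHumanReview(_ vouchers: [SafetyVoucher]) -> Bool {
    // Apple can interpret nothing until the number of matches crosses the threshold;
    // only then are the flagged items manually reviewed and, if confirmed, reported to NCMEC.
    return vouchers.filter { $0.matched }.count >= reviewThreshold
}
```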
So is Apple going to scan all my photos?
Apple isn’t scanning your photos. It is checking the numerical value assigned to each photo against a database of known illegal content to see if they match.

The system doesn’t see the image itself, only its NeuralHash, a numerical fingerprint derived from it. It is also only checking images uploaded to iCloud Photos, and it cannot detect images whose hashes aren’t in the database. John Gruber explains:

The CSAM detection for images uploaded to iCloud Photo Library is not doing content analysis and is only checking fingerprint hashes against the database of known CSAM fingerprints. So, to name one common innocent example, if you have photos of your kids in the bathtub, or otherwise frolicking in a state of undress, no content analysis is performed that tries to detect that, hey, this is a picture of an undressed child.

Put another way by Fast Company:

Because the on-device scanning of such images checks only for fingerprints/hashes of already known and verified illegal images, the system is not capable of detecting new, genuine child pornography or misidentifying an image such as a baby’s bath as pornographic.

Can I opt out?
Apple has confirmed to iMore that its system can only detect CSAM in iCloud Photos, so if you switch off iCloud Photos, your photos won’t be checked. Obviously, quite a lot of people use this feature and there are obvious benefits, so that’s a big trade-off. So, yes, but at a cost that some people might consider unfair.
Will this flag photos of my children?
The system can only detect known images containing CSAM as held in NCMEC’s database; it isn’t trawling for photos that contain children and then flagging them. As mentioned, the system can’t actually see the content of the photos, only a photo’s numerical value, its “hash”. So no, Apple’s system won’t flag photos of your grandkids playing in the bath.
What if the system gets it wrong?
Apple is clear that built-in protections all but eliminate the chance of false positives. Apple says that the chance of the automated system incorrectly flagging a user is one in one trillion per year. If, by chance, someone were flagged incorrectly, Apple would see this when the threshold is met and the images are manually inspected, at which point it would be able to verify the mistake and nothing further would be done.
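
To see why a match threshold drives that number down so far, here is some purely illustrative arithmetic (our own, not Apple’s published analysis), assuming a per-image false-match rate and an account-level threshold:

```swift
import Foundation

// Crude upper bound on the chance that an account with `n` uploads per year is
// wrongly flagged, assuming each image independently false-matches with probability `p`
// and `t` matches are needed before Apple is notified: roughly C(n, t) * p^t.
func roughFalseFlagChance(perImageFalseMatchRate p: Double,
                          photosUploadedPerYear n: Int,
                          matchThreshold t: Int) -> Double {
    let combinations = (0..<t).reduce(1.0) { $0 * Double(n - $1) / Double($1 + 1) }  // C(n, t)
    return min(1.0, combinations * pow(p, Double(t)))
}

// Even with generous assumptions, the bound collapses quickly as the threshold grows.
let example = roughFalseFlagChance(perImageFalseMatchRate: 1e-6,
                                   photosUploadedPerYear: 10_000,
                                   matchThreshold: 10)
```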

Plenty of people have raised issues and concerns about some of these measures, and some have noted Apple might have been mistaken in announcing them as a package, as some people seem to be conflating the different measures. Apple’s plans were also leaked prior to their announcement, meaning many people had already formed their opinions and raised potential objections before the announcement was made. Here are some other questions you might have.
Which countries will get these features?
Apple’s Child Safety features are only coming to the U.S. at this time. However, Apple has confirmed that it will consider rolling out these features to other countries on a country-by-country basis after weighing the legal options. This would seem to indicate that Apple is at least considering a rollout beyond U.S. shores.
What if an authoritarian government wanted to use these tools?
There are lots of concerns that CSAM scanning or the iMessage machine learning could pave the way for a government that wants to crack down on political imagery or use it as a tool for censorship. The New York Times asked Apple’s Erik Neuenschwander this very question:

“What happens when other governments ask Apple to use this for other purposes?” Mr. Green asked. “What’s Apple going to say?”
Mr. Neuenschwander dismissed those concerns, saying that safeguards are in place to prevent abuse of the system and that Apple would reject any such demands from a government.
“We will inform them that we did not build the thing they’re thinking of,” he said.

Apple says that the CSAM system is purely designed for CSAM image hashes and that there’s nowhere in the process where Apple could add to the list of hashes, say at the behest of a government or law enforcement. Apple also says that because the hash list is baked into the operating system, every device has the same set of hashes, and because it issues one operating system globally, there isn’t a way to modify it for a specific country.
The system also only works against a database of existing images, and so couldn’t be used for something like real-time surveillance or message interception.
What devices does this affect?
The new measures are coming to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey, so that’s iPhone, iPad, Apple Watch, and the Mac. The exception is CSAM detection, which is only coming to iPhone and iPad.
When will these new measures take effect?
Apple says all the features are coming “later this year”, so 2021, but we don’t know much beyond that. It seems likely that Apple would have these baked into the various operating systems by the time it releases its software to the public, which could happen with the launch of the iPhone 13 or shortly after.
Why didn’t Apple announce this at WWDC?
We don’t know for sure, but given how controversial these measures are proving to be, it’s likely this is all anyone would have talked about if they had been announced there. Perhaps Apple simply didn’t want to detract from the rest of the event, or perhaps the announcement wasn’t ready.
Why is Apple doing this now?
That’s also unclear; some have postulated that Apple wants to get CSAM scanning off its iCloud servers and move to an on-device system. John Gruber wonders if the CSAM measures are paving the way for a fully E2E-encrypted iCloud Photo Library and iCloud device backups.
Conclusion
As we said at the start, this is a new announcement from Apple and the set of known information is still evolving, so we’ll keep updating this page as more answers, and more questions, roll in. If you have a question or concern about these new plans, leave it in the comments below or over on Twitter @iMore.
Want more discussion? We talked to famed Apple analyst Rene Ritchie on the iMore show to discuss the latest changes; check it out below!
