What makes you think that internal access control at Apple is any better than Google's, Microsoft's, or OpenAI's? Google employees have long reported that you can't access user data with standard credentials, for example.

Also, what makes you think that Apple's investments in chip design and OS work are superior to Google's? Google is known for OpenTitan and other in-house silicon projects. It has also been working on secure enclave tech (https://news.ycombinator.com/item?id=20265625), which has been open source for years. You're making unverifiable claims about Apple's actual implementation of the technical systems and policies it is marketing. Apple also sells ads (in the App Store, but on other surfaces as well), and you have no evidence that your AI data is not being used to target you. Conversely, not all user data is used by Google for ad targeting.

---

If you can live without a cellphone, you're not living in reality? Interesting argument.

I wonder how all those people managed in the '90s and '00s, before the age of smartphones.

---

I worked for Google for almost 14 years. Never did any engineer or product manager I know of ever suggest snooping into cloud customer data, especially for customers using Shielded VMs and Customer-Managed Encryption Keys for attached storage (https://cloud.google.com/kubernetes-engine/docs/how-to/using...). I've never seen even the slightest hint of it, and the security people at Google are extremely exacting about the design and enforcement of these things.

This stuff is all designed so that even an employee with physical access to the machine would find it very difficult to get at data. It's encrypted at rest with customer keys, which are held in enclaves in volatile RAM. If you detached the computer or disk, you'd lose access. You'd have to perform an attack by somehow injecting code into the running system, and Shielded VM/GKE instances make that very hard. I am not a Google employee anymore, but the common tactic of throwing out "oh, their business model includes ads, ergo they will sell anything and everything, and violate the contracts they sign to steal private data from your private cloud" is a bridge too far.
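To make the encrypted-at-rest point concrete, here is a minimal sketch of the envelope-encryption pattern that customer-managed keys rely on. This is illustrative Python using the `cryptography` package, not Google's actual CMEK implementation; the key names and the in-memory handling are assumptions.

```python
# Minimal envelope-encryption sketch (illustrative only, not Google's CMEK code).
# A customer-managed key wraps a per-disk data key; the unwrapped data key is
# kept only in memory, so a detached disk holds nothing but ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

customer_key = AESGCM(os.urandom(32))   # in reality held in the customer's KMS
data_key = os.urandom(32)               # per-disk data encryption key (DEK), RAM only

# Only the wrapped form of the DEK is ever written next to the data.
wrap_nonce = os.urandom(12)
wrapped_dek = customer_key.encrypt(wrap_nonce, data_key, b"disk-42")

# Disk blocks are encrypted with the in-memory DEK.
block_nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(block_nonce, b"customer data block", None)

# Someone who walks off with the disk has wrapped_dek + ciphertext, and neither
# opens without the customer key, which can be revoked or withheld.
plaintext = AESGCM(data_key).decrypt(block_nonce, ciphertext, None)
assert plaintext == b"customer data block"
```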
---

It's impossible to use any technology if you don't trust anyone.

Any piece of technology MAY have a backdoor or secondary function you don't know about and can't discover without breaking the device open.

---

Disclaimer: I used to work on Google Search Ads quality models.

> Google obviously tracks everything you do for ads, recommendations, AI, you name it. They don't even hide it, it's a core part of their business model.

This wasn't my experience. Google is intentional about which data from which products goes into its ads models (which are separate from its other user modeling), and you can see which of your data is used for ads personalization at https://myadcenter.google.com/personalizationoff or via the "Why this ad" option on ads.

> and it's very much not necessary or even a sound business idea for them to do something else

I agree that Apple plays up privacy in its advertising and product positioning. I think assuming all future products will be privacy-respecting because of this is over-trusting. There is _a lot_ of money in advertising and personal data.

---

Do you not remember Edward Snowden? E.g. this sort of info:

> The scandal broke in early June 2013, when the Guardian newspaper reported that the US National Security Agency (NSA) was collecting the telephone records of tens of millions of Americans.

> The paper published the secret court order directing telecommunications company Verizon to hand over all its telephone data to the NSA on an "ongoing daily basis".

https://www.bbc.com/news/world-us-canada-23123964

You seem to think that, 10 years on and under cover of secret orders, this is NOT still going on. Not at Apple! People's lovely trusting natures toward corporations and government never cease to amaze me.

---

Fair, go ahead and expect the worst, and handwave away any attempt to mitigate.

But I'm not sure where that leaves you. Is it just a nihilistic "no security matters, it's all a show" viewpoint?

---

It is as complicated as I make it sound. Technically, it's trivial, of course.

But operationally it is incredibly complicated to deliver and operate this kind of false attestation at massive scale.

---

This is what the "attestation" bit is supposed to take care of, if it works, which I'm assuming it will, because they're open-sourcing it for security auditing.

---

This isn't right.

If you trust the math, you can prove the software is what they say it is. Yes, it takes work to do this, but it is a big step forward.
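As a toy illustration of what "trust the math" buys you in practice: obtain or reproduce the published build image, hash it, and compare against the measurement in the public transparency log that client devices will insist on. The file name and the placeholder digest below are hypothetical, not Apple's tooling.

```python
# Toy measurement check (hypothetical names; the real log format is Apple's).
import hashlib

def measure(path: str) -> str:
    h = hashlib.sha384()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest copied from the public transparency log entry (placeholder value here).
published_digest = "0" * 96
print("matches published build" if measure("pcc_node_image.img") == published_digest
      else "image differs from what devices will accept")
```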
---

No, this really isn't right.

To quote:

> Verifiable transparency goes one step further and does away with the hypothetical: security researchers must be able to verify the security and privacy guarantees of Private Cloud Compute, and they must be able to verify that the software that's running in the PCC production environment is the same as the software they inspected when verifying the guarantees.

So how does this work?

> The PCC client on the user's device then encrypts this request directly to the public keys of the PCC nodes that it has first confirmed are valid and cryptographically certified. This provides end-to-end encryption from the user's device to the validated PCC nodes, ensuring the request cannot be accessed in transit by anything outside those highly protected PCC nodes.

> Next, we must protect the integrity of the PCC node and prevent any tampering with the keys used by PCC to decrypt user requests. The system uses Secure Boot and Code Signing for an enforceable guarantee that only authorized and cryptographically measured code is executable on the node. All code that can run on the node must be part of a trust cache that has been signed by Apple, approved for that specific PCC node, and loaded by the Secure Enclave such that it cannot be changed or amended at runtime.

But why can't a three-letter agency bypass this?

> We designed Private Cloud Compute to ensure that privileged access doesn't allow anyone to bypass our stateless computation guarantees.

> We consider allowing security researchers to verify the end-to-end security and privacy guarantees of Private Cloud Compute to be a critical requirement for ongoing public trust in the system. ... When we launch Private Cloud Compute, we'll take the extraordinary step of making software images of every production build of PCC publicly available for security research. This promise, too, is an enforceable guarantee: user devices will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software.

So your data will not be sent to nodes that are not cryptographically attested as running the publicly listed, third-party-inspectable software. These are pretty strong guarantees, and they really do make it difficult for Apple to bypass its own system. It's like end-to-end encryption using the Signal protocol: relatively easy to verify that it does what is claimed, and extraordinarily hard to bypass.

Specifically:

> The only thing the math tells you is that the server software gave you a correct key.

No, this is secure attestation. See for example https://courses.cs.washington.edu/courses/csep590/06wi/final..., which explains it quite well. The weakness of attestation is that you don't know what the root of trust is, but Apple strengthens this with public inspection and public transparency logs, as well as the target-diffusion technique, which forces an attack to be very widespread in order to reach a single user. These aren't simple things for a 3LA to work around.
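For what it's worth, here is a minimal sketch of the client-side flow the quoted text describes, in Python using the `cryptography` package. It is not Apple's protocol: the "vendor key" root of trust, the data structures, and every name below are assumptions made purely for illustration. The point is the ordering: no encryption key is derived for a node until its measurement is on the public list and its attestation verifies.

```python
# Hedged sketch of the client-side flow, NOT Apple's actual protocol.
# 1) Check the node's attested software measurement against the public list.
# 2) Verify the attestation signature over (node key, measurement).
# 3) Only then encrypt the request directly to that node's public key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519, x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Measurements of the publicly released builds (made-up value; in reality these
# would come from a transparency log that researchers can inspect).
PUBLISHED_MEASUREMENTS = {b"release-build-digest-001"}

def send_request(plaintext: bytes,
                 node_pub: x25519.X25519PublicKey,
                 measurement: bytes,
                 attestation_sig: bytes,
                 vendor_key: ed25519.Ed25519PublicKey) -> bytes:
    # Refuse any node whose software is not on the public list.
    if measurement not in PUBLISHED_MEASUREMENTS:
        raise ValueError("node is not running publicly listed software")

    # Verify the vendor-rooted attestation binding the node key to that build.
    # Raises InvalidSignature if the attestation is forged or tampered with.
    vendor_key.verify(attestation_sig,
                      node_pub.public_bytes_raw() + measurement)

    # Encrypt end-to-end to the attested node key (ephemeral ECDH + AEAD).
    ephemeral = x25519.X25519PrivateKey.generate()
    shared = ephemeral.exchange(node_pub)
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"pcc-sketch").derive(shared)
    nonce = os.urandom(12)
    sealed = AESGCM(key).encrypt(nonce, plaintext, None)
    return ephemeral.public_key().public_bytes_raw() + nonce + sealed
```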
---

Is there more to that thread? I can't read it if it exists, so I'm not sure if that is what the parent is talking about. I don't have a Twitter account anymore, so maybe it's locked?

---

Not even that anymore; all the links show is "Something went wrong, but don't fret — let's give it another shot."

Impossible to see any content.

---

If you have no threat model and want to opt out of random features just because... you probably shouldn't use Apple products at all. Or Google or Microsoft.

---

He's not wrong that, given you want to do this at all, this is the best way to do it. The alternative would be to not do it at all (though an opt-out would have been good).

---

Weird, I get a bunch of music and programming stuff on my Threads feed. It's not very deep, but what's on the surface is quite nice and not a bunch of almost-porn. Twitter's become half porn, though.

---

That's a good point. It would be interesting if there were a "git blame"-style command that showed a trust score for every line/block based on who has touched it.
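Something along these lines is easy to prototype on top of git's plumbing. A rough sketch follows: the `git blame --line-porcelain` invocation is real, but the trust table and scores are hypothetical stand-ins for whatever reputation signal you would actually maintain.

```python
# Rough prototype: annotate each line of a file with a trust score for the
# author who last touched it, using `git blame --line-porcelain` output.
import subprocess
import sys

TRUST = {"Linus Torvalds": 1.0, "Known Maintainer": 0.8}  # hypothetical table
DEFAULT_TRUST = 0.2  # unknown or unreviewed contributors

def blame_with_trust(path: str) -> None:
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    author = None
    for line in out.splitlines():
        if line.startswith("author "):
            author = line[len("author "):]
        elif line.startswith("\t"):  # porcelain prefixes the source line with a tab
            score = TRUST.get(author, DEFAULT_TRUST)
            print(f"{score:>4.1f}  {author:<20.20}  {line[1:]}")

if __name__ == "__main__":
    blame_with_trust(sys.argv[1])
```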
---

Good luck. It's much easier to talk about than to do. The last open OS I have seen reach a semi-mainstream level of adoption was started in the early '90s, more than 30 years ago, by some Linus guy.

---

And (basically) nobody running Linux is individually verifying the source code of every little piece of software that goes into it (maybe Linus is), so you're still trusting someone.

---

> If you have some sufficiently isolated machines, they're probably fine indefinitely.

The eternal dream of unplugging and living free on Amigas.

---

This was my takeaway from the presentation as well; I immediately thought of your feature. It will be interesting to hear your take on it once the details have been made available and fully understood.

---

Well, an 89-day "update-and-revert" schedule will take care of those pesky auditors asking too many questions about the NSA's backdoor or the CCP's backdoor and all that.

---

It makes zero sense for a company of this size. I bet they are served with gag orders practically daily, so the warrant canary is going to expire the moment it is published.

---

The orders in question aren't search warrants and don't require probable cause.

70,000+ Apple user accounts are surveilled in this manner every year.

---

What Apple can do (and appears to be doing throughout its products) is not have the data requested, or not have it in cleartext. NSLs can't request data that doesn't exist anymore.
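A toy sketch of the "not have it in cleartext" idea, in Python with the `cryptography` package. This is illustrative only, not Apple's implementation, and the key handling is an assumption: if the key never leaves the device, a demand served on the provider can only ever yield ciphertext.

```python
# Toy "provider never holds cleartext" sketch (illustrative, not Apple's code).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

device_key = AESGCM.generate_key(bit_length=256)  # stays on-device, e.g. in a keychain
nonce = os.urandom(12)
blob = nonce + AESGCM(device_key).encrypt(nonce, b"note: dentist at 3pm", None)

# This blob is all the provider stores, and all a legal demand can produce.
server_side_copy = blob

# Only the device, which holds device_key, can recover the plaintext.
recovered = AESGCM(device_key).decrypt(server_side_copy[:12], server_side_copy[12:], None)
assert recovered == b"note: dentist at 3pm"
```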
---

It seems to me that this security architecture is a direct response to the hostile regulatory environment Apple finds itself in with respect to the USA PATRIOT Act, the CCP, et al.

---

> even if we can't verify that their sanctioned use case is secure, the cloud OS could be a great step forward in secure inference and secure clouds, which people could independently host or build an independent derivative of

Yes, the tech industry loves to copy Apple :) Asahi Linux has a good overview of on-device boot chain security: https://github.com/AsahiLinux/docs/wiki/Apple-Platform-Secur...

> My main concern right now is that if virtualization is employed in their actual deployment, there could be a backdoor that passes keys from secure enclaves in still-proprietary parts of the OSes running on user devices to a hypervisor we didn't audit that can access the containers.

This seems to imply that PCC nodes are bare metal. Could a PCC node be simulated on an iPad Pro with the M4 Apple silicon?

---

Thank you for mentioning this. I thought I was going crazy, because I heard this too, but I kept seeing comment after comment on other sites asking whether a person could choose not to use OpenAI, or claiming that it happens magically in the background. The way I heard it, the user is in control.

I think this goes back to what Steve said in 2010: https://youtube.com/watch?v=Ij-jlF98SzA

And yes, while the data might not be linked to the user and is stripped of sensitive details, I could see people not wanting very personal things to go to OpenAI, even if there should be no link. For example, I wouldn't want any of my pictures going to OpenAI unless I specifically say it is OK for a given image.

---

I was under the impression that the OpenAI integrations were more about content generation and correction than the Apple Intelligence-driven personal stuff.

---

> How are they going to pay for all of that compute?

Hardware sales. Only the latest Pro/Max models will run these models; everyone else is going to have to upgrade.

---

If you're one of the richest companies in history, you can "simply" invest 15 years into developing your own chips instead of buying Nvidia GPUs.

---

> simply invest 15 years into developing your own chips instead of buying Nvidia GPUs

https://www.notebookcheck.net/Apple-and-Imagination-strike-G...

https://9to5mac.com/2020/01/01/apple-imagination-agreement/

---

https://www.devever.net/~hl/webcrypto

And to be fair, this doesn't apply only to this case. Even the data you have stored locally, Apple could access if they wanted to; they certainly have the power to do it if they so wish or if ordered to by the government. They might have done it already and just not told anyone, for obvious reasons. So I would argue the best you can say is that it's private in the sense that only Apple knows, or can know, what you are doing, rather than a larger number of entities.

Which, you could argue, is a win when the alternatives will leak your data to many more parties... but it's still far from the unbreakable cryptography it's portrayed to be.